Category Archives: scientific evidence

Banging your head against walls of bullshit

After watching – and briefly being at the receiving end of – the latest bullshit tsunami about the Covid vaccines, I’ve decided to write another blog post1 about scepticism and the value of scientific expertise. And then I realised I had already written that post a year ago :P. That’s great – it saves me time that I don’t really have in the first place! So instead I decided to list a few heuristics in the style of Occam’s Razor. While I have slim hope, perhaps they will help in fighting the bullshit in which we are drowning.

  • Snopes Razor: When your racist relative/former highschool classmate/contrarian ex-colleague posts something on Facebook, there is probably already a Snopes.com article debunking it.
  • Soros Razor: If they say that fact-checkers like Snopes or the BBC cannot be trusted because “they are in the pocket of George Soros/Bill Gates/Satanists/Aliens/Jewish Space Lasers”, don’t waste your breath and block them at once.
  • 360 Razor: Whenever the bullshitter pleads with you to “look at all the evidence from every angle”, they usually want you to look only at their evidence from their angle.
  • Majority Razor: When 9 out of 10 experts agree that something is bullshit, it probably is.2
  • NOFX Razor: Then again, just because everyone in your bubble believes the bullshit doesn’t make it right.3
  • Expert Razor: The bullshitter’s “scientific expert” has a PhD but it’s in a field that has nothing to do with the topic in question and/or their professorship is at a non-existent university.
  • Read-on Razor: If the bullshitter posts the title of a scientific article to make their point, read the abstract. If they post the abstract, read the whole article. It will inevitably say the exact opposite to what the bullshitter claims it does.
  • Extrapolation Razor: The bullshitter’s main thesis is inevitably based on a pretty wild misconstrual or outright deliberate distortion of something someone said or did in good faith.4
  • Oversimplification Razor: The bullshitter’s understanding of scientific evidence will not only be limited but overly simplistic, with no room for weighing the relative strength of evidence or for nuance.5
  • The Rabbit Hole: If a bullshitter bullshits about one bullshit they inevitably bullshit about other bullshit, too. And the bullshitting will only get worse. Bullshit begets bullshit.

1) How obvious is it that I’m getting as much of this out of my system before I let the domain expire and retire from blogging for good…?

2) It’s true, a single brilliant insight can revolutionise our understanding. But Galileo did that because he was right, not because he defied the establishment. The truth will out eventually. And on this note:

3) Majority rule don’t work in mental institutions. Sometimes the smallest softest voice carries the grand biggest solutions.

4) For a first-hand demonstration of this phenomenon, google “Bill Gates vaccine population control”. Then after reading for a bit you’ll want to take a good shower to scrub off that icky feeling. (If you’re in Auckland, keep it under 4 minutes though – it may be under some control now but we still have a water shortage…)

5) The interpretation of data is also highly asymmetric. To the bullshitter, 95% efficacy means “ineffective” while 0.00001% constitutes a “mortal risk”. That’s for example why “Vaccines don’t stop transmission”. The reason you don’t know anyone who died of smallpox or polio is sunspots. Incidentally, sunspots are also to blame for climate change of course. Except for the climate change that escaped from a Chinese climate lab.

Source: Anynobody, Wikimedia Commons

Uncontrollable hidden variables in recruiting participants

Here is another post on experimental design and data analysis. I need to get this stuff out of my system before I let the domain on this blog expire at the end of the year (I had planned this for last year already but then decided to keep the blog going because of a certain difficult post I had to write then…)

Hidden group differences?

This (hopefully brief) post is inspired by a Twitter discussion I had last night. The people in that thread had held a journal club about one of my lab’s recent publications. The details of that are not really important for this post – you can read their excellent questions about our work and my replies to them in the tweet thread. However, what this discussion reminded me of are the issues you can run into when dealing with human volunteer participants – issues you have no control over and, what is worse, may not even be aware of.

In this particular study, we compared retinotopic maps from groups of identical (MZ) and fraternal (DZ) twin pairs. One very notable point when you read our article is that the sample sizes for the two groups are quite different, with more MZ twin pairs than DZ pairs. We had some major difficulties finding DZ twins to take part, and what made matters worse is that we had to reclassify several purported DZ twins as MZ twins after genetic testing. Looking at the literature, this seems quite common. For example, we found a similar imbalance in the sample sizes of the Human Connectome Project (see for instance this preprint that also looked at retinotopic maps in twins at a more macroscopic level). A colleague of ours working on another twin study experienced the same problem (I don’t think that study has been published yet). Finally, here is just one more example of a vision science study with a substantially greater sample of MZ than DZ twins.

There are clearly problems recruiting DZ twins. Undoubtedly MZ twins are more “special”, and so there are organisations through which they can be reached. While there are participant pools for twins that contain both zygosities, the people managing these can be rather protective. This is understandable because these databases are a valuable scientific resource and they don’t want to tire out their participants by allowing too many researchers to approach them with requests to participate. These pools of participants may also be imbalanced because MZ twins self-select into them out of a strong interest in learning how similar they are. In contrast, DZ twins may have less interest in this question (although some obviously do). And even if you have a well-balanced pool of potential participants, there may be additional social factors at play. The MZ twins in those pools may be keener to take part than the DZ twins. Of course, zygosity may also interact with this in hidden ways. MZ twins might have a closer relationship with one another, even if it is just living geographically closer to each other, and that will doubtless affect how easy it is for them to participate in your study. All these issues are extremely difficult to know about, let alone control.

Not about twins

As I said, the details of our study aren’t really important and this post isn’t about twins. Rather, this is clearly a broader issue. Similar concerns affect any comparison between groups of participants. Anyone studying patients with a particular condition is probably familiar with that issue. Many patients are keen to take part in studies because they have an interest in better understanding their condition or – for some disorders or illnesses – contributing to the development of treatments. In contrast, recruiting the “matched” control participants can be very difficult and you may go to great and unusual lengths to find them. This can result in your control group being quite unusual compared to the standard participant sample you might have when you do fundamental research, especially considering a lot of such research is done on young undergraduate students recruited on university campuses.

Let’s imagine for example that we want to understand visual processing by professional basketball players in the NBA. A quick Googling suggests the average body height of NBA players is 1.98 m, considerably taller than the average male. Any comparison that does not take this into account would confound body height with basketball skill. Obviously you can control for aspects like this to some extent by using covariates (e.g. in multiple regression analyses) – but that only works for the variables you know about. More importantly, you’d be well-advised to recruit a matched control group that has a similar body height to your basketball players but without the athletic skills. That way you cancel out the effect of body height.
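As a hedged illustration of the covariate idea, here is a minimal sketch with entirely made-up numbers and plain least squares (no particular toolbox):

    n = 100;
    group  = [ones(n,1); zeros(n,1)];                    % 1 = NBA player, 0 = control
    height = [198 + 8*randn(n,1); 177 + 8*randn(n,1)];   % body height in cm (made up)
    visual = 0.02*height + 0.5*randn(2*n,1);             % outcome driven by height only
    X = [ones(2*n,1) group height];                      % intercept, group, covariate
    b = X \ visual;                                      % least-squares estimates
    % b(2) is the group effect adjusted for height and comes out near zero;
    % dropping the height column would let the group term soak up the
    % height difference and suggest a spurious "basketball" effect

The point of the sketch is only that the covariate soaks up the confound you included – it cannot do anything about the confounds you never measured.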

But how does this recruitment drive interact with your samples? For one thing, it will probably be difficult to find these tall controls. While most NBA players are very tall (even short NBA players are presumably above average height), really tall people in the general population are rare. So finding them may take a long time. But what is worse, the ones you do find may also differ in other respects from your average person. For body height this may not be too problematic, but you never know what issues a very tall person who doesn’t happen to have a multi-million dollar athletic contract might face.

These issues can be quite nefarious. For instance, I was involved in a study a few years ago where we had to recruit control participants matched to our main group of interest both in terms of demographic details and psychological measures. What we ended up with was a lot of exclusions of potential control participants due to drug use, tattoos or metal implants (a safety hazard), and in one case an undisclosed medical history we only discovered serendipitously. The rationale for selecting participants with particular matched traits from the general population is based on the assumption that these traits are random – however, this fails if there is some hidden association between that trait and other confounding factors. In essence, this is just another form of the selection bias that I have written about recently…

The problem is there is simply no good way to control for that. You cannot use a variable as covariate when you don’t know it exists. This means that particular variable simply becomes part of the noise, the variance not explained by your model. It is entirely possible that this noise masquerades as a difference that doesn’t really exist (Type I error) or obscures true effects (Type II error). You can and should obviously check for potential caveats and thus establish the robustness of the findings but that can only go so far.

Small N designs

This brings me back to another one of my pet issues: small N designs, which are common in psychophysics. Some psychophysics experiments have as few as two participants, both of whom are also authors of the publication. It is debatable how valid this extreme might be – one of my personal heuristics is that you should always include some “naive” observers (or at least one?) to show that results do not crucially depend on knowledge of the hypothesis. But these designs can nevertheless be valid. Many experiments are actually difficult for the participant to influence through willpower alone. I’ve done some experiments on myself where I thought I was responding a certain way only to find the results didn’t reflect this intuition at all.

And there is definitely something to be said for having trained observers. I’ve covered this topic several times before so I won’t go into detail here. But it doesn’t really make sense to contaminate your results with bad data. A lot of psychophysical experiments require steady eye gaze to ensure that stimuli are presented at the parafoveal and peripheral locations you want to test. It doesn’t make much sense to include participants who cannot maintain fixation. (On that note, it is interesting that some results can actually be surprisingly robust even in the presence of considerable eye movements – such as what we found in this study. This opens up a number of questions as to what those results mean but I have not yet figured out a good way to answer them…).

This is quite different from your typical psychology experiment. Imagine you want to test (bear with me here) how fast your participants walk down the corridor after leaving your lab cubicle where you had them do some task with words… While there may be some justified reasons for exclusion of participants (such as that they obviously didn’t comply with your task instructions, failed to understand the words, or got an urgent phone call that caused them to sprint down the hall), there is no such thing as a “trained observer” here. You want to make an inference about how the average person reacts to your experimental manipulation. Therefore you need to use a statistical approach that tests the group average. We don’t want only people who are well trained at walking down corridors.

In contrast, in threshold psychophysics you don’t care about the “average person” but rather you want to know what the threshold performance is after all that other human noise – say inattention, hand-eye coordination, fixation instability, uncorrected refractive error, mind-wandering, etc – has been excluded. Your research question is what is the just noticeable difference in stimuli under optimal conditions, not what is the just noticeable difference when distracted by thoughts about dinner or your inability to press the right button at the right time. A related (and more insidious) issue is also introspection. One could make the argument that many trained observers are also better at judging the contents of their perceptual awareness than someone you recruited off the street (or your student participant pool). A trained observer may be quite adept at saying that the grating you showed appeared to them tilted slightly to the left – your Average Jo(sephine) may simply say that they noticed no difference. (This could in part be quantified by differences in response criterion but that is not guaranteed to work).

Taken together, the problem here is not with the small N approach – it is doubtless justified in many situations. Rather I wonder how to decide when it is justified. The cases described above seem fairly obvious but in many situations things can be more complicated. And to return to the main topic of this post, there could be insidious interactions between finding the right observers and your results. If I need trained observers for a particular experiment but I also want to find some who are naive to the purpose of the experiment, my inclusion criteria may bias the participants I end up with (this usually means your participants are all members of your department :P). For many purposes these biases may not matter. In some cases they probably do – for instance reports that visual illusions differ considerably in different populations. Ideally you want trained observers from all the groups you are comparing in this case.

Everything* you ever wanted to know about perceived income but were afraid to ask

This is a follow-up to my previous post. For context, you may wish to read this first. In that post I discussed how a plot from a Guardian piece (based on a policy paper) made the claim that German earners tend to misjudge themselves as being closer to the mean or, in the authors’ own words, “everyone thinks they’re middle class“. Last week, I looked at this in the simplest way possible. What I think this plot shows is simply the effect of transforming a normal-ish data distribution into a quantile scale. For reference, here is the original figure again:

The column on the left isn’t data. It simply labels the deciles, the 10% brackets of the income distribution. My point previously was that if you calculate the means of the actual data for each decile you get exactly the line-squeeze plot that is shown here. Obviously this depends on the range of the scale you use. I simply transformed (normalised) the income data onto a 1-10 scale where the maximum earner gets a score of 10 and everyone else falls below that. The point really is that in this scenario this has absolutely nothing to do with perceiving your income at all. Simply plotting the normalised income data produces a plot that is highly reminiscent of the real thing.

Does the question matter?

Obviously my example wasn’t really mimicking what happens with perceived income. By design, it wasn’t supposed to. However, this seems to have led to some confusion about what my “simulation” (if you can even call it that) was showing. A blog post by Dino Carpentras argues that what matters here is how perceived income was measured. Here I want to show why I believe this isn’t the case.

First of all, Dino suggested that if people indeed reported their decile then the plot should have perfectly horizontal lines. Dino’s post already includes some very nice illustrations of that so I won’t rehash it here and instead encourage you to read that post. A similar point was made to me on Twitter by Rob McCutcheon. Now, obviously, if people actually reported their true deciles then this would indeed be the case. In that scenario we are simply plotting the decile against the decile – no surprises there. In fact, almost the same would happen if they estimated the exact quantile they fall in and we then averaged that (that’s what I think Rob’s tweet is showing, but I admit my R is too rusty to get into this right now).

My previous post implicitly assumed that people are not actually doing that. When you ask people to rate themselves on a 1-10 scale in terms of where their income lies, I doubt people will think about deciles. But keep in mind that the actual survey asked the participants to rate exactly that. Yet even in this case, I doubt that people are naturally inclined to think about themselves in terms of quantiles. Humans are terrible at judging distributions and probability and this is no exception. However, this is an empirical question – there may well be a lot of research on this already that I’m unaware of and I’d be curious to know about it.

But I maintain that my previous point still stands. To illustrate, I first show what the data would look like in these different scenarios if people could indeed judge their income perfectly on either scale. The plot below shows what I used in my example previously. This is a distribution of (simulated) actual incomes. The x-axis shows the income in fictitious dollars. All my previous simulation did was normalise this, so the numbers/ticks on the x-axis change to range between 1 and 10 but all the relationships remain the same.

But now let us assume that people can judge their income quantile. This comes with a big assumption that all survey respondents even know what that means, which I strongly doubt. But let’s take it for granted that any individual is able to report accurately what percentage of the population earns less than them. Below I plot that on the y-axis against the actual income on the x-axis. It gives you the characteristic sigmoid shape – a function most psychophysicists will be very familiar with: the cumulative Gaussian.

If we averaged the y-values for each x-decile and plotted this the way the original graph did, we would get close to horizontal lines. That’s the example I believe Rob showed in his tweet above. However, Dino’s post goes further and assumes people can actually report their deciles (that is, answer the question the survey asked perfectly). That is effectively rounding the quantile reports into 10% brackets. Here is the plot of that. It still follows the vague sigmoid shape but becomes sharply edged.
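For the record, here is a minimal sketch of these two idealised scenarios – my own reconstruction under the stated assumptions, not the survey’s or anyone else’s actual code:

    % simulated actual incomes (same toy distribution as in my earlier post)
    income = abs(60000 + 20000*randn(10000,1));
    % each person's exact rank-based quantile and decile
    [~, order] = sort(income);
    rank_ = zeros(size(income));
    rank_(order) = (1:numel(income))';
    true_quantile = (rank_ - 0.5) / numel(income);        % exact quantile, 0-1
    true_decile   = ceil(rank_ / (numel(income)/10));     % exact decile, 1-10
    % average within each actual decile, as the line plot does
    mean_quantile = arrayfun(@(d) mean(true_quantile(true_decile == d)), 1:10)
    mean_decile   = arrayfun(@(d) mean(true_decile(true_decile == d)), 1:10)
    % mean_quantile comes out at ~0.05, 0.15, ... 0.95 (near-horizontal lines);
    % mean_decile is exactly 1:10 (perfectly horizontal lines)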

If you now plotted the line squeeze diagram used in the original graph, you would get perfectly horizontal lines. As I said, I won’t replot this; there really is no need for it. But obviously this is not a realistic scenario. We are talking about self-ratings here. In my last post I already elaborated on a few psychological factors why self-rating measures will be noisy. This is by no means exhaustive. There will be error on any measure, starting from simple mistakes in self-report or whatever. While we should always seek to reduce the noise in our measurements, noisy measurements are at the heart of science.

So let’s simulate that. Sources of error will affect the “perceived income” construct at several levels. The simplest thing we can do to simulate this is to add an error to how much each individual thinks their actual income is – we take each person’s income and add a Gaussian error. I used a Gaussian with SD=$30,000. That may be excessive but we don’t really know that. There is likely error in how high people think their income is relative to their peers and their general spending power. Even more likely, there must be error in how they rate themselves on the 1-10 decile scale. I suspect that, when transformed back into actual income, this will be disproportionately larger than the error on judging their own income in dollars. It doesn’t really matter in principle.
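In code, that noise step looks roughly like this – a minimal sketch under my own assumptions (the full Matlab code for the simulation is linked at the end of the post):

    % simulated actual incomes, as before
    income = abs(60000 + 20000*randn(10000,1));
    % each person misjudges their own income with Gaussian error, SD = $30,000
    perceived_income = income + 30000*randn(size(income));
    % convert the noisy estimate into a perceived quantile (0-1), i.e. the
    % proportion of the population earning less than the perceived amount
    perc_quantile = arrayfun(@(p) mean(income < p), perceived_income);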

Each point here is a simulated person’s self-reported income quantile plotted against their actual income. As you can see, while the data still follow the vague sigmoid shape, there is a lot of scatter in people’s “reported” quantiles compared to what they actually should be. For clarity, I added a colour code here which denotes the actual income decile each person belongs to. The darkest blue are the 10% lowest earners and the yellow bracket the top earners.

Next I round people’s perceived quantiles to simulate their self-reported deciles. The point of this is to effectively transform the self-reports into the discrete 1-10 scale that we believe the actual survey respondents used (I still don’t know the methods and whether people were allowed to score themselves a 5.5, for instance – but based on my reading of the paper the scale was discrete). I replot these self-reported deciles using the same format:
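That rounding step is roughly the following (continuing the snippet above, so income and perc_quantile are assumed to already exist):

    % discretise the noisy quantiles into a 1-10 self-reported decile scale
    reported_decile = min(max(ceil(perc_quantile * 10), 1), 10);
    % each person's actual decile, from their real income
    [~, order] = sort(income);
    actual_decile = zeros(size(income));
    actual_decile(order) = ceil((1:numel(income))' / (numel(income)/10));
    % average the self-reported decile within each actual decile bracket
    bin_means = arrayfun(@(d) mean(reported_decile(actual_decile == d)), 1:10)
    % with this much noise the bin means huddle around the middle of the
    % scale instead of running from 1 to 10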

Obviously, the y-axis will now again cluster in these 10 discrete levels. But as you can see from the colour code, the “self-reported” decile is a poor reflection of the actual income bracket. While a relative majority (or plurality) of respondents scoring themselves 1 are indeed in the lowest decile, in this particular example some of them are actual top earners. The same applies to the other brackets. Respondents thinking of themselves as perfectly middle class in decile 5 actually come more or less equally from across the spectrum. Now, again this may be a bit excessive but bear with me for just a while longer…

What happens when we replot this with our now infamous line plots? Voilà, doesn’t this look hauntingly familiar?

The reason for this is that perceived income is a psychological measure. Or even just a real world measure. It is noisy. The take-home message here is: It does not matter what question you ask the participants. People aren’t computers. The corollary of that is that when data are noisy the line plot must necessarily produce this squeezing effect the original study reported.

Now you may rightly say, Sam, this noise simulation is excessive. That may well be. I’ll be the first to admit that there are probably not many billionaires who will rate themselves as belonging to the lowest decile. However, I suspect that people indeed have quite a few delusions about their actual income, and this may be most likely to affect people in the actual middle range. So I don’t think the example here is as extreme as it may appear at first glance. There are also many further complications, such as the fact that these measures are probably heteroscedastic. The error by which individuals misjudge their actual income level in dollars is almost certainly greater for high earners. My example here is very simplistic in assuming the same amount of error across the whole population. This heteroscedasticity is likely to introduce further distortions – such as the stronger “underestimation” by top earners compared to the “overestimation” by low earners, i.e. what the original graph purports to show.

In any case, the amount of error you choose for the simulation doesn’t affect the qualitative pattern. If people are more accurate at judging their income decile, the amount of “squeezing” we see in these line plots will be less extreme. But it must be there. So any of these plots will necessarily contain a degree of this artifact and thus make it very difficult to ascertain if this misestimation claimed by the policy paper and the corresponding Guardian piece actually exists.

Finally, I want to reiterate this because it is important: What this shows is that people are bad at judging their income. There is error on this judgement, but crucially this is Gaussian (or semi-Gaussian) error. It is symmetric. Top earner Jeff may underestimate his own income because he has no real concept of how the other half** live. In contrast, billionaire Donny may overestimate his own wealth because of his fragile ego and he forgot how much money he wastes on fake tanning oil. The point is, every individual*** in our simulated population is equally likely to over- or under-estimate their income – however, even with such symmetric noise the final outcome of this binned line plot is that the bin averages trend towards the population mean.

*) Well, perhaps almost everything?

**) Or to be precise, how the other 99.999% live.

***) Actually because my simulation prevents negative incomes for the very lowest earners, the error must skew their perceived income upwards.

Matlab code for this simulation is available here.

It’s #$%&ing everywhere!

I can hear you singing in the distance
I can see you when I close my eyes
Once you were somewhere and now you’re everywhere


Superblood Wolfmoon – Pearl Jam

If you read my previous blog post you’ll know I have a particular relationship these days with regression to the mean – and binning artifacts in general. Our recent retraction of a study reminded me of this issue. Of course, I was generally aware of the concept, as I am sure are most quantitative scientists. But often the underlying issues are somewhat obscure, which is why I certainly didn’t immediately clock on to them in our past work. It took a collaborative group effort with serendipitous suggestions, much thinking and simulating and digging, and not least of all the tireless efforts of my PhD student Susanne Stoll to uncover the full extent of this issue in our published research. We also still maintain that this rabbit hole goes a lot deeper, because there are numerous other studies that used similar analyses. They must by necessity contain the same error – hopefully the magnitude of the problem is less severe in most other studies so that their conclusions aren’t all completely spurious. However, we simply cannot know that until somebody investigates this empirically. There are several candidates out there where I think the problem is almost certainly big enough to invalidate the conclusions. But I am not the data police and I am not going to run around arguing people’s conclusions are invalid without A) having concrete evidence and B) having talked to the authors personally first.

What I can do, however, is explain how to spot likely candidates of this problem. And you really don’t have far to look. We believe that this issue affects almost all pRF studies – specifically, every pRF study that uses any kind of binning. There are cases where this is probably of no consequence – but people must at least be aware of the issue before it leads to false assumptions and thus erroneous conclusions. We hope to publish another article in the future that lays out this issue in some depth.

But it goes well beyond that. This isn’t a specific problem with pRF studies. Many years before that I had discussions with David Shanks about this subject when he was writing an article (also long since published) on how this artifact confounds many studies in the field of unconscious processing, something that certainly overlaps with my own research. Only last year there was an article arguing that the same artifact explains the Dunning-Kruger effect. And I am starting to see this issue literally everywhere1 now… Just the other day I saw this figure on one of my social media feeds:

This data visualisation makes a striking claim with very clear political implications: High income earners (and presumably very rich people in general) underestimate their wealth relative to society as a whole, while low income earners overestimate theirs. A great number of narratives can be spun about this depending on your own political inclinations. It doesn’t take much imagination to conjure up the ways this could be used to further a political agenda, be it a fierce progressive tax policy or a rabid pulling-yourself-up-by-your-own-bootstraps type of conservatism. I have no interest in getting into this discussion here. What interests me here is whether the claim is actually supported by the evidence.

There are a number of open questions here. I don’t know how “perceived income” is measured exactly2. It could theoretically be possible that some adjustments were made here to control for artifacts. However, taken at face value this looks almost like a textbook example of regression to the mean. Effectively, you have an independent variable, the individuals’ actual income levels. We can presumably regard this as a ground truth – an individual’s income is what it is. We then take a dependent variable, perceived income. It is probably safe to assume that this will correlate with actual income. However, this is not a perfect correlation because perfect correlations are generally meaningless (say correlating body height in inches and centimeters). Obviously, perceived income is a psychological measure that must depend on a whole number of extraneous factors. For one thing, people’s social networks aren’t completely random but we all live embedded in a social context. You will doubtless judge your wealth relative to the people you mostly interact with. Another source of misestimation could be how this perception is measured. I don’t know how that was done here in detail but people were apparently asked to self-rate their assumed income decile. We can expect psychological factors at play that make people unlikely to put themselves in the lowest or highest scores on such a scale. There are many other factors at play but that’s not really important. The point is that we can safely assume that people are relatively bad at judging their true income relative to the whole of society.

But to hell with it, let’s just disregard all that. Instead, let us assume that people are actually perfectly accurate at judging their own income relative to society. Let’s simulate this scenario3. First we draw 10,000 people from a Gaussian distribution of actual incomes. This distribution has a mean of $60,000 and a standard deviation of $20,000 – all in fictitious dollars which we assume our fictitious country uses. We assume these are based on people’s paychecks so there is no error4 on this independent variable at all. I use the absolute values to ensure that there is no negative income. The figure below shows the actual objective income for each (simulated) person on the x-axis. The y-axis is just random scatter for visualisation – it has no other significance. The colour code denotes the income bracket (decile) each person belongs to.

Next I simulate perceived income deciles for these fictitious people. To do this we need to do some rescaling to get everyone onto the 1-10 scale, with 10 being the top earner. However – and this is important – as per our (certainly false) assumption above, perceived income is perfectly correlated with actual income. It is a simple transformation to rescale it. Now, what happens when you average the perceived income in each of these decile brackets like that graph above did? I do that below, using the same formatting as the original graph:
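The actual Matlab code for this example is linked in footnote 3; the sketch below is merely a condensed re-sketch of the same scenario under the assumptions stated above, just to show how little is needed:

    % 10,000 simulated people with Gaussian incomes, no negative values
    income = abs(60000 + 20000*randn(10000,1));
    % "perceived income" on a 1-10 scale, perfectly correlated with income:
    % a simple min-max rescaling, 10 = top earner
    perceived = 1 + 9*(income - min(income)) / (max(income) - min(income));
    % each person's actual income decile
    [~, order] = sort(income);
    decile = zeros(size(income));
    decile(order) = ceil((1:numel(income))' / (numel(income)/10));
    % average the perceived score within each decile, as the graph does
    decile_means = arrayfun(@(d) mean(perceived(decile == d)), 1:10)
    % despite perfect "perception", the outer bin means sit well inside
    % the 1-10 range -- the line plot squeezes towards the middle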

I will leave it to you, gentle reader, to determine how this compares to the original figure. Why is this happening? It’s simple really when you think about it: Take the highest income bracket. This ranges widely from high-but-reasonable to filthy-more-money-than-you-could-ever-spend-in-a-lifetime rich. This is not a symmetric distribution. The summary statistics of these binned data will be heavily skewed: their mean/median will be biased downwards for the top income brackets and upwards for the low income brackets. Only the income deciles near the centre will be approximately symmetric and thus produce unbiased estimates. Or to put it in simpler terms: the left column simply labels the decile brackets. The only data here are in the right column, and all this plot really shows is that the incomes have a Gaussian-like distribution. This has nothing to do with perceptions of income whatsoever.

In the discussions I’ve had, this still confuses some people. So I added another illustration. In the graph below I plot a normal distribution. The coloured bands denote the approximate deciles. The white dots on the x-axis show the mean for each decile. The distance between these dots is obviously not equal. They all tend to be closer to the population mean (zero) than to the middle of their respective bands. This bias is present for all deciles except perhaps the most central ones. However, it is most extreme for the outermost deciles because these have the most asymmetric distributions. This is exactly what the income plots above are showing. It doesn’t matter whether we are looking at actual or perceived income. It doesn’t matter at all if there is error on those measures or not. All that matters is the distribution of the data.
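A quick numerical check of this point, using nothing but a standard normal distribution:

    % decile means of a standard normal distribution
    x  = sort(randn(1e6,1));
    ix = round(linspace(0, numel(x), 11));          % decile boundaries by rank
    decile_means = arrayfun(@(d) mean(x(ix(d)+1:ix(d+1))), 1:10)
    % the top decile mean comes out around +1.75 even though that band
    % starts at roughly +1.28 and extends to the sample maximum; every
    % decile mean is pulled towards zero relative to the centre of its band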

Now, as I already said, I haven’t seen the detailed methodology of that original survey. If the analysis made any attempt to mathematically correct for this problem then I’ll stand corrected5. However, even in that case, the general statistical issue is extremely widespread and this serves as a perfect example of how binning can result in wildly erroneous conclusions. It also illustrates the importance of this issue. The same problem relates to pRF tuning widths and stimulus preferences and whatnot – but that is frankly of limited importance. But things like these income statistics could have considerable social implications. What this shows to me is twofold: First, please be careful when you do data analysis. Whenever possible, feed some simulated data to your analysis to see if it behaves as you think it should. Second, binning sucks. I see it effing everywhere now and I feel like I haven’t slept in months6…

Superbloodmoon eclipse
Photo by Dave O’Brien, May 2021
  1. A very similar thing happened when I first learned about heteroscedasticity. I kept seeing it in all plots then as well – and I still do…
  2. Many thanks to Susanne Stoll for digging up the source for these data. I didn’t see much in terms of actual methods details here but I also didn’t really look too hard. Via Twitter I also discovered the corresponding Guardian piece which contains the original graph.
  3. Matlab code for this example is available here. I still don’t really do R. Can’t teach an old dog new tricks or whatever…
  4. There may be some error with a self-report measure of people’s actual income although this error is perhaps low – either way we do not need to assume any error here at all.
  5. Somehow I doubt it but I’d be very happy to be wrong.
  6. There could however be other reasons for that…

If this post confused you, there is now a follow-up post to confuse you even more… 🙂

When the hole changes the pigeon

or How innocent assumptions can lead to wrong conclusions

I promised you a (neuro)science post. Don’t let the title mislead you into thinking we’re talking about world affairs and societal ills again. While pigeonholing is directly related to polarised politics and social media, for once this is not what this post is about. Rather, it is about a common error in data analysis. Although similar issues have been described numerous times over the decades, it is – as we’ve learned the hard way – a surprisingly easy mistake to make. A lay summary and some wider musings on the scientific process were published by Benjamin de Haas. A scientific article by Susanne Stoll laying out this problem in more detail is currently available as a preprint.

Pigeonholing (Source: https://commons.wikimedia.org/wiki/File:TooManyPigeons.jpg)

Data binning

In science you often end up with large data sets, with hundreds or thousands of individual observations subject to considerable variance. For instance, in my own field of retinotopic population receptive field (pRF) mapping, a given visual brain area may have a few thousand recording sites, and each has a receptive field position. There are many other scenarios of course. It could be neural firing, or galvanic skin responses, or eye positions recorded at different time points. Or it could be hundreds or thousands of trials in a psychophysics experiment etc. I will talk about pRF mapping because this is where we recently encountered the problem and I am going to describe how it has affected our own findings – however, you may come across the same issue in many guises.

Imagine that we want to test how pRFs move around when you attend to a particular visual field location. I deliberately use this example because it is precisely what a bunch of published pRF studies did, including one of ours. There is some evidence that selective attention shifts the position of neuronal receptive fields, so it is not far-fetched that it might shift pRFs in fMRI experiments also. Our study for instance investigated whether pRFs shift when participants are engaged in a demanding (“high load”) task at fixation, compared to a baseline condition where they only need to detect a simple colour change of the fixation target (“low load”). Indeed, we found that across many visual areas pRFs shifted outwards (i.e. away from fixation). This suggested to us that the retinotopic map reorganises to reflect a kind of tunnel vision when participants are focussed on the central task.

What would be a good way to quantify such map reorganisation? One simple way might be to plot each pRF in the visual field with a vector showing how it is shifted under the attentional manipulation. In the graph below, each dot shows a pRF location under the attentional condition, and the line shows how it has moved away from baseline. Since there is a large number of pRFs, many of which are affected by measurement noise or other errors, these plots can be cluttered and confusing:

Plotting shift of each pRF in the attention condition relative to baseline. Each dot shows where a pRF landed under the attentional manipulation, and the line shows how it has shifted away from baseline. This plot is a hellishly confusing mess.

Clearly, we need to do something to tidy up this mess. So we take the data from the baseline condition (in pRF studies, this would normally be attending to a simple colour change at fixation) and divide the visual field up into a number of smaller segments, each of which contains some pRFs. We then calculate the mean position of the pRFs from each segment under the attentional manipulation. Effectively, we summarise the shift from baseline for each segment:

We divide the visual field into segments based on the pRF data from the baseline condition and then plot the mean shift in the experimental condition for each segment. A much clearer graph that suggests some very substantial shifts…

This produces a much clearer plot that suggests some interesting, systematic changes in the visual field representation under attention. Surely, this is compelling evidence that pRFs are affected by this manipulation?

False assumptions

Unfortunately it is not1. The mistake here is to assume that there is no noise in the baseline measure that was used to divide up the data in the first place. If our baseline pRF map were a perfect measure of the visual field representation, then this would have been fine. However, like most data, pRF estimates are variable and subject to many sources of error. The misestimation is also unlikely to be perfectly symmetric – for example, there are several reasons why it is more likely that a pRF will be estimated closer to central vision than in the periphery. This means there could be complex and non-linear error patterns that are very difficult to predict.

The data I showed in these figures are in fact not from an attentional manipulation at all. Rather, they come from a replication experiment where we simply measured a person’s pRF maps twice over the course of several months. One thing we do know is that pRF measurements are quite robust, stable over time, and even similar between scanners with different magnetic field strengths. What this means is that any shifts we found are most likely due to noise. They are completely artifactual.

When you think about it, this error is really quite obvious: sorting observations into clear categories can only be valid if you can be confident in the continuous measure on which you base these categories. Pigeonholing can only work if you can be sure into which hole each pigeon belongs. This error is also hardly new. It has been described in numerous forms as regression to the mean and it rears its ugly head every few years in different fields. It is also related to circular inference, which has already caused a stir in cognitive and social neuroscience a few years ago. Perhaps the reason for this is that it is a damn easy mistake to make – but that doesn’t make the face-palming moment any less frustrating.
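To make this concrete, here is a toy sketch with made-up numbers (not our actual data or analysis pipeline): two noisy measurements of the same underlying pRF positions, binned by eccentricity in the first measurement.

    n = 5000;
    true_x = 8*randn(n,1);  true_y = 8*randn(n,1);            % "true" pRF positions
    x1 = true_x + 2*randn(n,1);  y1 = true_y + 2*randn(n,1);  % baseline map
    x2 = true_x + 2*randn(n,1);  y2 = true_y + 2*randn(n,1);  % repeat map
    ecc1 = hypot(x1, y1);  ecc2 = hypot(x2, y2);              % eccentricities
    edges = 0:2:20;                                           % eccentricity bins
    shift = zeros(1, numel(edges)-1);
    for b = 1:numel(edges)-1
        sel = ecc1 >= edges(b) & ecc1 < edges(b+1);           % bin by measurement 1
        shift(b) = mean(ecc2(sel) - ecc1(sel));               % apparent mean shift
    end
    shift
    % although nothing truly changed between the two measurements, the inner
    % bins appear to shift outwards and the outer bins inwards, simply
    % because the bins were defined on the noisy baseline measurement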

It is not difficult to correct this error. In the plot below, I used an independent map from yet another, third pRF mapping session to divide up the visual field. Then I calculated how the pRFs in each visual field segment shifted on average between the two experimental sessions. While some shift vectors remain, they are considerably smaller than in the earlier graph. Again, keep in mind that these are simple replication data and we would not really expect any systematic shifts. There certainly does not seem to be a very obvious pattern here – perhaps there is a bit of a clockwise shift in the right visual hemifield but that breaks down in the left. Either way, this analysis gives us an estimate of how much variability there may be in this measurement.

We use an independent map to divide the visual field into segments. Then we calculate the mean position for each segment in the baseline and the experimental condition, and work out the shift vector between them. For each segment, this plot shows that vector. This plot loses some information, but it shows how much and into which direction pRFs in each segment shifted on average.
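Continuing the toy sketch from above (it reuses the variables defined there), this is roughly what binning on an independent third measurement looks like:

    x3 = true_x + 2*randn(n,1);  y3 = true_y + 2*randn(n,1);  % independent map
    ecc3 = hypot(x3, y3);
    shift_unbiased = zeros(1, numel(edges)-1);
    for b = 1:numel(edges)-1
        sel = ecc3 >= edges(b) & ecc3 < edges(b+1);           % bin by the third map
        shift_unbiased(b) = mean(ecc2(sel) - ecc1(sel));
    end
    shift_unbiased
    % the binning variable now shares no noise with either measurement, so
    % the spurious inward/outward shifts largely vanish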

This approach of using a third, independent map loses some information because the vectors only tell you the direction and magnitude of the shifts, not exactly where the pRFs started from and where they end up. Often the magnitude and direction of the shift is all we really need to know. However, when the exact position is crucial we could use other approaches. We will explore this in greater depth in upcoming publications.

On the bright side, the example I picked here is probably extreme because I didn’t restrict these plots to a particular region of interest but used all supra-threshold voxels in the occipital cortex. A more restricted analysis would remove some of that noise – but the problem nevertheless remains. How much it skews the findings depends very much on how noisy the data are. Data tend to be less noisy in early visual cortex than in higher-level brain regions, which is where people usually find the most dramatic pRF shifts…

Correcting the literature

It is so easy to make this mistake that you can find it all over the pRF literature. Clearly, neither authors nor reviewers have given it much thought. It is definitely not confined to studies of visual attention, although this is how we stumbled across it. It could be a comparison between different analysis methods or stimulus protocols. It could be studies measuring the plasticity of retinotopic maps after visual field loss. Ironically, it could even be studies that investigate the potential artifacts when mapping such plasticity incorrectly. It is not restricted to the kinds of plots I showed here but should affect any form of binning, including the binning into eccentricity bins that is most common in the literature. We suspect the problem is also pervasive in many other fields or in studies using other techniques. Only a few years ago a similar issue was described by David Shanks in the context of studying unconscious processing. It is also related to warnings you may occasionally hear about using median splits – really just a simpler version of the same approach.

I cannot tell you if the findings from other studies that made this error are spurious. To know that we would need access to the data and reanalyse these studies. Many of them were published before data and code sharing was relatively common2. Moreover, you really need to have a validation dataset, like the replication data in my example figures here. The diversity of analysis pipelines and experimental designs makes this very complex – no two of these studies are alike. The error distributions may also vary between different studies, so ideally we need replication datasets for each study.

In any case, as far as our attentional load study is concerned, after reanalysing these data with unbiased methods, we found little evidence of the effects we published originally. While there is still a hint of pRF shifts, these are no longer statistically significant. As painful as this is, we therefore retracted that finding from the scientific record. There is a great stigma associated with retraction, because of the shady circumstances under which it often happens. But to err is human – and this is part of the scientific method. As I said many times before, science is self-correcting but that is not some magical process. Science doesn’t just happen, it requires actual scientists to do the work. While it can be painful to realise that your interpretation of your data was wrong, this does not diminish the value of this original work3 – if anything this work served an important purpose by revealing the problem to us.

We mostly stumbled across this problem by accident. Susanne Stoll and Elisa Infanti conducted a more complex pRF experiment on attention and found that the purported pRF shifts in all experimental conditions were suspiciously similar (you can see this in an early conference poster here). It took us many months of digging, running endless simulations, complex reanalyses, and sometimes heated arguments before we cracked that particular nut. The problem may seem really obvious now – it sure as hell wasn’t before all that.

This is why this erroneous practice appears to be widespread in this literature and may have skewed the findings of many other published studies. This does not mean that all these findings are false but it should serve as a warning. Ideally, other researchers will also revisit their own findings but whether or not they do so is frankly up to them. Reviewers will hopefully be more aware of the issue in future. People might question the validity of some of these findings in the absence of any reanalysis. But in the end, it doesn’t matter all that much which individual findings hold up and which don’t4.

Check your assumptions

I am personally more interested in taking this whole field forward. This issue is not confined to the scenario I described here. pRF analysis is often quite complex. So are many other analyses in cognitive neuroscience and, of course, in many other fields as well. Flexibility in study designs and analysis approaches is not a bad thing – being able to adapt our experimental designs is in fact essential for addressing scientific questions.

But what this story shows very clearly is the importance of checking our assumptions. This is all the more important when using the complex methods that are ubiquitous in our field. As cognitive neuroscience matures, it is critical that we adopt good practices in ensuring the validity of our methods. In the computational and software development sectors, it is to my knowledge commonplace to test algorithms on conditions where the ground truth is known, such as random and/or simulated data.

This idea is probably not even new to most people and it certainly isn’t to me. During my PhD there was a researcher in the lab who had concocted a pretty complicated analysis of single-cell electrophysiology recordings. It involved lots of summarising and recentering of neuronal tuning functions to produce the final outputs. Neither I nor our supervisor really followed every step of this procedure based only on our colleague’s description – it was just too complex. But eventually we suspected that something might be off and so we fed random numbers to the algorithm – lo and behold the results were a picture perfect reproduction of the purported “experimental” results. Since then, I have simulated the results of my analyses a few other times – for example, when I first started with pRF modelling or when I developed new techniques for measuring psychophysical quantities.

This latest episode taught me that we must do this much more systematically. For any new design, we should conduct control analyses to check how it behaves with data for which the ground truth is known. It can reveal statistical artifacts that might hide inside the algorithm but also help you determine the method’s sensitivity and thus allow you to conduct power calculations. Ideally, we would do that for every new experiment even if it uses a standard design. I realise that this may not always be feasible – but in that case there should be a justification why it is unnecessary.
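In practice this can be as simple as wrapping the whole pipeline in a function and feeding it noise. A sketch, where my_analysis is merely a placeholder for whatever your actual analysis is:

    n_sims = 100;
    null_out = zeros(n_sims, 1);
    for s = 1:n_sims
        fake_data   = randn(1000, 2);          % pure noise, no effect built in
        null_out(s) = my_analysis(fake_data);  % placeholder for your pipeline
    end
    histogram(null_out)                        % should be centred on "no effect"
    % if the null distribution is biased or suspiciously narrow, the
    % analysis itself is manufacturing structure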

Because what this really boils down to is simply good science. When you use a method without checking that it works as intended, you are effectively doing a study without a control condition – quite possibly the original sin of science.

Acknowledgements

In conclusion, I quickly want to thank several people: First of all, Susanne Stoll deserves major credit for tirelessly pursuing this issue in great detail over the past two years with countless reanalyses and simulations. Many of these won’t ever see the light of day but helped us wrap our heads around what is going on here. I want to thank Elisa Infanti for her input and in particular the suggestion of running the analysis on random data – without this we might never have realised how deep this rabbit hole goes. I also want to acknowledge the patience and understanding of our co-authors on the attentional load study, Geraint Rees and Elaine Anderson, for helping us deal with all the stages of grief associated with this. Lastly, I want to thank Benjamin de Haas, the first author of that study for honourably doing the right thing. A lesser man would have simply booked a press conference at Current Biology Total Landscaping instead to say it’s all fake news and announce a legal challenge5.

Footnotes:

  1. The sheer magnitude of some of these shifts may also be scientifically implausible, an issue I’ve repeatedly discussed on this blog already. Similar shifts have however been reported in the literature – another clue that perhaps something is awry in these studies…
  2. Not that data sharing is enormously common even now.
  3. It is also a solid data set with a fairly large number of participants. We’ve based our canonical hemodynamic response function on the data collected for this study – there is no reason to stop using this irrespective of whether the main claims are correct or not.
  4. Although it sure would be nice to know, wouldn’t it?
  5. Did you really think I’d make it through a blog post without making any comment like this?

Implausible hypotheses

A day may come when I will stop talking about conspiracy theories again, but it is not this day. There is probably nothing new about conspiracy theories – they have doubtless been with us since our evolutionary ancestors gained sentience – but I fear that they are a particularly troublesome scourge of our modern society. The global connectivity of the internet and social media enables the spread of this misinformation pandemic in unprecedented ways, just as our physical connectivity facilitates the spread of an actual virus. Also like an actual virus, they can be extremely dangerous and destructive.

But fear not, I will try to move this back to being a blog on neuroscience eventually :P. Today’s post is about some tools we can use to determine the plausibility of a hypothesis. I have written about this before. Science is all about formulating hypotheses and putting them to the test. Not all hypotheses are created equal however – some hypotheses are so obviously true they hardly need testing while others are so implausible that testing them is pointless. Using conspiracy theories as an example, here I will list some tools I use to spot what I consider to be highly implausible hypotheses. I think this is a perfect example, because despite the name conspiracy theories are not actually scientific theories at all – they are in fact conspiracy hypotheses and most are pretty damn implausible.

This is not meant to be an exhaustive list. There may be other things you can think of that help you determine that a claim is implausible, for example Carl Sagan’s chapter on The Fine Art of Baloney Detection. You can also relate much of this back to common logical fallacies. My post merely lists a few basic features that I frequently encounter out there in the wild. Perhaps you’ll find this list useful in your own daily face-palming experiences.

The Bond Villain

Is a central feature to this purported plot a powerful billionaire with infinite funding and unlimited resources and power at their disposal? Do they have a convoluted plan that just smells evil, such as killing off large parts of the world population for the “common good”? You know, like injecting them with vaccines that sterilise them?

The House of Cards

Is the convoluted plan so complicated and carefully crafted numerous steps in advance where each little event has to fall in place just right in order for it to work? You know, like using 5G tech to weaken people’s immune system so it starts a global pandemic with a virus you created in your secret lab so that everyone happily gets injected with your vaccine which will contain nanoscale microchips but not with any other vaccine that others might have developed in the meantime? And obviously you know your vaccine will work against the virus because you could test it thoroughly without anybody else finding out about it?

The Future Tech

Does the plan involve some technology you’ve first heard of on Star Trek or Doctor Who? Is a respiratory illness caused by mobile phone technology? Is someone injecting nanoscale computer chips with a vaccine? Is there brain scanning technology with spatial and temporal resolution that would render all of my research completely obsolete?

The Red Pill

Have you been living a lie all your life? Will embracing the idea mean that you have awoken and/or finally see what’s right in front of you? Are most other people brainwashed sheeple? Did a YouTube video by someone you’ve never heard of finally open your eyes to reality?

The Dull Razorblade

Is the idea built on multiple factors that are not actually necessary to explain the events that unfolded? Was a virus “obviously” created in a lab even though countless viruses occur naturally? Is what they claim happened really more likely than the same thing simply happening by chance? Is the most obvious explanation for why the motif of Orion’s Belt appears throughout history and across the world that aliens visited from Rigel 7, and not that it’s one of the most recognisable constellations in the night sky?

The World Government

Does it require the deep cooperation of most governments in the world whilst they squabble and vehemently disagree in the public limelight? Is the nefarious scheme perpetrated by the United Nations, which are famous for always agreeing, being efficient, and never having any conflict? (Note that occasionally it may only be the European Union rather than the UN).

The Flawed Explanation

Are the individual hypotheses that form the bigger conspiracy mutually exclusive? Is it based on current geography or environmental conditions even though it happened hundreds, thousands, or millions of years in the past? Does it involve connecting dots on the Mercator world map in straight lines which would actually not be straight on the globe or any other map projection?

The Unlikely Saint

Is the person most criticised, ridiculed, or reviled by the mainstream media in fact the good guy? Imagine, if you will, a world leader who is a former intelligence operative and spymaster and who has invaded several sovereign countries. Is he falsely accused of assassinating his enemies and pursuing cold political calculations, and is he actually just a friendly, misunderstood teddy bear? Or perhaps that demagogue, who riles up the masses with hateful rhetoric and has committed acts of corruption in broad daylight, is in fact defending us from evil puppy-eating monsters? Does the CEO of a fossil fuel company in truth protect us from all those environmentalist hippies in centre-right governments who want to poison us with clean air and their utopian idealism of a habitable planet?

The Vast Network

Is everyone in on it? Are all scientists – including all authors, editors, and peer reviewers, and all the technical support staff and administrators – all influential political leaders and their aides, all medical doctors, nurses, and pharmacists, all engineers, and all school teachers involved in this complex scheme to fool the unwashed masses, even though there has never been a credible whistleblower? Have they all remained silent even though the Moon landing was hoaxed half a century ago? Do all scientists working on a vaccine for a widespread disease actually want to inject you with nanoscale microchips? Is there, fortunately, a YouTuber whose videos finally lay bare this outrageous, evil scheme?

The Competent Masterminds

Does it assume an immense level of competence and skill on the part of political leaders and organisations to execute their nefarious convoluted plans in the face of clear evidence to the contrary? Are they all just acting like disorganised buffoons to fool us?

The Insincere Questions

Is the framer of the idea “merely asking questions”? Do they simply want you to “think for yourself”? Does thinking for yourself in fact mean agreeing with that person? Do they ask questions about who funded some scientific research without any understanding of how scientific research is actually funded? Are they “not saying it was aliens”, even though it is obvious they mean it was in fact aliens?

The Unfalsifiable Claims

Is there no empirical evidence that could prove the claim wrong? Is the argument going in circles, or are the goalposts being shifted? Is a fact-checking website untrustworthy because it is “obviously part of the conspiracy”, even though you can directly check its source material – which is, of course, also all fabricated? Is the idea based on some claim that has been shown to be a fraud, whose perpetrator has been discredited even by his co-authors, but naturally this is just part of an even bigger cover-up and smear campaign? Can only the purveyor of this conspiracy theory be trusted?

The Torrent of Praise

Is the comment section under this YouTube video or Facebook post a long list of people praising and commending the poster for their truth-telling and use of “evidence”? Do most of these commenters have numbers in their name? Do they have profile pictures that look strangely akin to stock photos? Do any of the comments concur with the original post by adding some anecdote that sounds like an episode of the X-Files?

The Puppet Masters

Does it mention the Elders of Zion, the Illuminati, the Knights Templar, or some similar-sounding secret organisation? Or perhaps the Deep State?

The Flat Earth

Does it blatantly deny reality?

I was wrong…

It has been almost a year since I last posted on this blog. I apologise for this hiatus. I’m afraid it will continue, as it will probably be even longer before my next post – I simply don’t have the time for the blog these days. But in a brief lull in activities I decided to write this well-overdue post. No, this is not yet another neuroscientist wheeling out his Dunning-Krugerism to make a simplistic and probably dead-wrong (no pun intended) model of the CoViD-19 pandemic, and I certainly won’t be talking about what governments are doing right or wrong in handling this dreadful situation. But the post is at least moderately related to the pandemic, to this very issue of expertise, and more broadly to current world events.

Years ago, I was locked in an extended debate with parapsychology researchers about the evidence for so-called “psi” effects (precognition, telepathy, and the like). To make matters worse, I made the crucial mistake of also engaging in discussion with some of the social media followers of these researchers. I have since gotten a little wiser and learned about the futility and sanity-destroying nature of social media (but not before going through the pain of experiencing its horrors in other contexts, not least of all Brexitrump). I now try my best (but sometimes still fail) to stay away from this shit and all the outrage junkies and drama royalty. Perhaps I just got tired…

Anyway, in the course of this discussion about “psi” research, I uttered the following phrase (or at least this is a paraphrase – I’m too lazy to look it up):

To be a scientist, is to be a skeptic.

This statement was based on the notions of scientific scrutiny, of objectively weighing the evidence for or against a proposition, of giving the null hypothesis a chance, and of never simply taking anybody’s word for it. It was driven by an idealistic and quite possibly naive belief in the scientific method, and by the excitement about scientific thinking in some popular circles. But I was wrong.

Taken on their own, none of these things are wrong of course. It is true that scientists should challenge dogma and widely-held assumptions. We should be skeptical of scientific claims, and the same level of scrutiny should be applied to evidence confirming our predictions as to evidence that seems to refute them. Arguments from authority are logically fallacious and we shouldn’t just take somebody at their word simply because of their expertise. As fallible human beings, we scientists can fool ourselves into believing something that actually isn’t true, regardless of expertise, and perhaps at times expertise can even result in deeply entrenched viewpoints, so it pays to keep an open mind.

But there’s too much of a good thing. Too much skepticism will lead you astray. There is a saying, that has been (mis-)attributed to various people in various forms. I don’t know who first said it and I don’t much care either:

It pays to keep an open mind, but not so open that your brains fall out.

Taken at face value, this may seem out of place. Isn’t an open mind the exact opposite of being skeptical? Isn’t the purpose of this quote precisely to tell people not to believe just about any nonsense? Yes and no. If you spend any time reading and listening to conspiracy theories – and I strongly advise you not to – then you’ll find that the admonition to keep an open mind is actually a major hallmark of this misguided and dangerous ideology. I’ve seen memes making the rounds claiming that most people are “sheeple” and only those who have awoken to the truth see the world as it really is, and lots of other such crap. Conspiracy theorists really do keep a very open mind indeed.

A belief in wild-eyed conspiracies goes hand in hand with the utmost skepticism of anything that smells even remotely like the status quo or our current knowledge. It involves being open to every explanation out there – except to the one that is most likely true. It is the Trust No One philosophy. When I was a teenager, I enjoyed the X-Files. One of my favourite video games, Deus Ex, was strongly inspired by a whole range of conspiracy theories. It is great entertainment, but some people seem to take this message a little too much to heart. If you look into the plot of Deus Ex, you’ll find some haunting parallels to actual world events, from terrorist attacks on New York City to the pandemic we are experiencing now. Ironically, one could even spin conspiracies about the game itself for that reason.

[Image: Deus Ex cover art]

Conspiracy theories are very much in fashion right now, probably helped by the fact that there is currently a lunatic in the White House who is actively promoting them. It would be all fun and games, if it were only about UFOs, Ancient Aliens, Flat Earth, or the yeti. Or even about the idea that us dogmatic scientists want to suppress the “truth” that precognition is a thing*. But it isn’t just that.

From the origins of the novel coronavirus to vaccinations to climate change, we are constantly bombarded by conspiratorial thinking and its consequences. People have apparently set fire to 5G radio masts because of it. Trust in authorities and experts has been eroded all over the globe. The internet seems to facilitate the spread of these ideas, so they become far more influential than they would have been in past decades – sometimes to very damaging effect.

Can we even blame people? It does become ever harder to trust anything or anybody. I have seen first-hand how many news media are more interested in publishing articles to make a political point than in factual accuracy. This may not even be deliberate; journalists work to tight deadlines in a struggling industry trying to stay financially afloat. Revelations about the origins of the Iraq War and scandals of collusion and election meddling, some of which may well be true conspiracies while others may be liberal pipe dreams (and many may fall into a grey area in between), don’t help to restore public trust. And of course public trust in science isn’t helped by the Replication Crisis**.

Science isn’t just about being skeptical

Sure, science is about challenging assumptions but it is also about weighing all available evidence. The challenging of assumptions we see in conspiracies is all too often cherry-picking. Science is also about the principle of parsimony and it requires us to determine the plausibility of claims. Crucially, it is also about acknowledging all the things we don’t know. That last point includes recognising that, you know, perhaps an expert in an area actually does occasionally know more about it than you.

No, you shouldn’t just believe anything someone says merely because they have PhD in the topic. And I honestly don’t know if expertise is really all that crucial in replicating social priming effects – this is for me where the issues with plausibility kick in. But knowing something about a topic gives experts insights that will elude an outsider and it would serve us well to listen to them. They should certainly have to justify and validate their claims – you shouldn’t just take their word as gospel. But don’t delude yourself into thinking you’ve uncovered “the Truth” by disbelieving everybody else. If I’ve learned anything from doing research, it is that the greatest delusion is when you think you’ve actually understood anything.

I have observed a worrying trend among some otherwise rather sensible people to brush aside criticism of conspiracy theories as smugness or over-confidence. This manifests in insinuations like these:

  • Of course, vaccines don’t cause autism, but perhaps this just distracts from the fact that they could be dangerous after all?
  • Of course, 5G doesn’t give people coronavirus but have governments used this pandemic as an opportunity to roll out 5G tech?
  • Of course, CoViD-19 wasn’t manufactured in a Chinese lab, but researchers from the Wuhan Institute of Virology published studies on such coronaviruses, so isn’t it possible that they already had the virus and it escaped the lab due to negligence, or was even set loose on purpose?

Conspiracy theories always deal in possibilities. Of course, they require ardent believers to promote their tinfoil-hat ideas. But they also feed on people like us, people with a somewhat skeptical and inquisitive mind who every so often fall prey to their own cognitive biases. Of course all of these statements are possible – but that’s not the point. Science is not about what is possible but about what is probable. And probabilities change as the evidence accumulates.

How plausible is the claim, and even if it is plausible, is it more probable than other explanations or scenarios? Even if there were evidence that companies took advantage of the pandemic to roll out 5G (you know, this thing that has been debated for years and which had been planned ages before anyone even knew what a coronavirus was), wouldn’t it make sense to do this at a time when a world population in lockdown has an unprecedented need for reliable and sophisticated mobile internet? Also, so fucking what? What concrete reason do you have for thinking 5G is a problem? Or are you just talking about the same itchy feeling people in past ages had about the internet, television, radio, and doubtless at some point also about books?

Let us for a moment ignore the blatant racism and various other factors that make this idea actually quite unlikely, and accept the possibility that the coronavirus escaped from a lab in Wuhan. Why shouldn’t there be a lab studying animal-to-human transmission of viruses that have the potential to cause pandemics, especially since we already know this has happened with numerous illnesses before and researchers warned years ago that such a coronavirus pandemic was coming? Doesn’t it make sense to study this at a place where it is likely to occur? What is more likely: that the thing we know happens happened, or that someone left a jar open by accident and let the virus escape the lab? How do you think the virus got into the lab in the first place? What makes it more likely that it escaped a lab than that it originated on a market where wild exotic animals are being consumed?

There is also an odd irony about some of these ideas. Anti-vaxxers seem somewhat quiet these days now that everybody is clamouring for a vaccine for CoViD-19. Perhaps that’s to be expected. But while there is literally no evidence that widely used vaccines make you sick (at least beyond the weakened immune response that makes you insusceptible to the actual disease anyway), there are very good reasons to ask whether a new drug or treatment is safe. This is why researchers keep reminding us that a vaccine is still at least a year away, and why I find recent suggestions that one could become available as early as this September somewhat concerning. It is certainly great that so much work is being put into fighting this pandemic, and if human usage can begin soon that is obviously good news – but before we move to wide global use perhaps we should ensure that this vaccine is actually safe. The plus side is that, in contrast to anti-vaxxers, vaccine scientists are actually concerned about people’s health and well-being.

The real conspiracy

Ask yourself who stands to gain if you believe a claim, whether it is a scientific finding, an official government statement, or a conspiracy. Most conspiracy theories further somebody’s agenda. They could help somebody’s reelection, or bring them political influence by eroding trust in certain organisations or professions – but it could also be much simpler than that: clickbait makes serious money, and some people sow disinformation simply for the fun of it. We can be sure of one real conspiracy: the industry behind conspiracy theories.

 

* Still waiting for my paycheck for being in the pocket of Big Second-Law-of-Thermodynamics…

** This is no reason not to improve the replicability and transparency of scientific research – quite the opposite!

By analogy

In June 2016, the United Kingdom carried out a little study to test the hypothesis that it is the “will of the people” that the country should leave the European Union. The result favoured the Leave hypothesis, albeit with a really small effect size (1.89%). This finding came as a surprise to many, but as is so often the case, it is the most surprising results that have the most impact.

Accusations of p-hacking soon emerged. Not only was there a clear sampling bias but data thugs suggested that the results might have even been obtained by fraud. Nevertheless, the original publication was never retracted. What’s wrong with inflating the results a bit? Massaging data to fit a theory is not the worst sin! The history of science is rich with errors. Such studies can be of value if they offer new clarity in looking at phenomena.

In fact, the 2016 study did offer a lot of new ways to look at the situation. There was a fair amount of HARKing about what the result of the 2016 study actually meant. Prior to conducting the study, at conferences and in seminars, the proponents of the Leave hypothesis kept talking about the UK having a relationship with the EU like Norway’s and Switzerland’s. Yet somehow, in the eventual publication of the 2016 findings, the authors had changed their tune. Now they professed that their hypothesis had obviously always been that the UK should leave the EU without any deal whatsoever.

Sceptics of the Leave hypothesis pointed out various problems with this idea. For one thing, leaving the EU without a deal wasn’t a very plausible hypothesis. There were thousands of little factors to be considered and it seemed unlikely that this was really the will of the people. And of course, the nitpickers also said that “barely more than half” could never be considered the “will of the people”.

Almost immediately, there were calls for a replication to confirm that the “will of the people” really was what believers in the Leave-without-a-deal hypothesis claimed. At first, these voices came only from a ragtag band of second stringers – but as time went on and more and more people realised just how implausible the Leave hypothesis really was, their numbers grew.

Leavers, however, firmly disagreed. To them, a direct replication was meaningless. That was odd, for some of them had openly admitted they wanted to p-hack the hell out of this thing until they got the result they wanted. But now they claimed that there had by now been several conceptual replications of the 2016 results, first in the United States and later also in Brazil, and some might argue even in Italy, Hungary, and Poland. Similar results were also found in several other European countries, albeit not statistically significant ones. Based on all this evidence, surely a meta-analysis supported the general hypothesis?

But the replicators weren’t dissuaded. The more radical among these methodological terrorists posited that any study in which the experimental design isn’t clearly defined and preregistered prior to data collection is inherently exploratory, and cannot be used to test any hypotheses. They instead called for a preregistered replication, ideally a Registered Report where the methods are peer-reviewed and the manuscript is in principle accepted for publication before data collection even commences. The fact that the 2016 study didn’t do this was just one of its many problems. But people still cite it simply because of its novelty. The replicators also pointed to other research fields, like Switzerland and Ireland, where this approach has long been used very successfully.

As an added twist, it turns out that nobody had actually read the background literature. The 2016 study was already a replication attempt of previous findings from 1975. Sure, some people had vaguely heard about this earlier study. Everybody who has ever been to a conference knows that there is always one white-haired emeritus professor in the audience who will shout out “But I already did this four decades ago!”. But nobody really bothered to read the original study until now. It found an enormous result in the opposite direction: 17.23% in favour of remaining in Europe. As some commentators suggested, the population at large may have changed over the past four decades, or there may have been subtle but important differences in the methodology. What if leaving Europe then meant something different to what it means now? But if that were the case, couldn’t leaving Europe in 2016 also have meant something different than in 2019?

But the Leave proponents wouldn’t have any of that. They had already invested too much money and effort, and spent all this time giving TED talks about their shiny little theory, to give up now. They were in fact desperately afraid of a direct replication because they knew that, as with most replications, it would probably end in a null result and their beautiful theoretical construct would collapse like a house of cards. Deep inside, most of these people already knew they were chasing a phantom, but they couldn’t ever admit it. People like Professor BoJo, Dr Moggy, and Micky “The Class Clown” Gove had built their whole careers on this Leave idea, and so they defended the “will of the people” with religious zeal. The last straw they clutched at was to warn that all these failures to replicate would cause irreparable damage to the public’s faith in science.

Only Nigel Farage, unaffiliated garden gnome and self-styled “irreverent citizen scientist”, relented somewhat. Naturally, he claimed he would be doing all that just for science and the pursuit of the truth and that the result of this replication would be even clearer than the 2016 finding. But in truth, he smelled the danger on the wind. He knew that should the Leave hypothesis be finally accepted by consensus, he would be reduced to a complete irrelevance. What was more, he would not get that hefty paycheck.

As of today, the situation remains unresolved. The preregistered replication attempt is still stuck in editorial triage and hasn’t even been sent out for peer review yet. But meanwhile, people in the corridors of power in Westminster and Brussels and Tokyo and wherever else are already basing their decisions on the really weak and poor and quite possibly fraudulent data from the flawed 2016 study. But then, it’s all about the flair, isn’t it?

[Image: Brexit demonstration flags]
Shameless little bullies calling for an independent replication outside of the Palace of Westminster (Source: ChiralJon)

Massaging data to fit a theory is antithetical to science

I have stayed out of the Wansink saga for the most part. If you don’t know what this is about, I suggest reading about this case on Retraction Watch. I had a few private conversations about this with Nick Brown, who has been one of the people instrumental in bringing about a whole series of retractions of Wansink’s publications. I have a marginal interest in some of Wansink’s famous research, specifically whether the size of plates can influence how much a person eats, because I have a broader interest in the interplay between perception and behaviour.

But none of that is particularly important. The short story is that considerable irregularities have been discovered in a string of Wansink’s publications, many of which have since been retracted. The whole affair first kicked off with a blog post he wrote (now removed, so I’m pointing to Gelman’s coverage instead), a fundamental own-goal in which he essentially seemed to promote p-hacking. Since then, the problems that came to light have ranged from irregularities in (or the impossibility of) some of the data he reported, to evidence of questionable research practices such as cherry-picking or excluding data, to widespread self-plagiarism. Arguably, not all of these issues are equally damning and for some the evidence is more tenuous than for others – but the sheer quantity of problems is egregious. The resulting retractions seem entirely justified.

Today I read an article in Times Higher Education entitled “Massaging data to fit a theory is not the worst research sin” by Martin Cohen, which discusses Wansink’s research sins in the broader context of the philosophy of science. The argument is pretty muddled to me, so I am not entirely sure what the author’s point is – but the effective gist seems to be that concerns about questionable research practices can be shrugged off and that Wansink’s research is still a meaningful contribution to science. In my mind, Cohen’s article reflects a fundamental misunderstanding of how science works and in places sounds positively post-Truthian. In the following, I will discuss some of the more curious claims made in this article.

“Massaging data to fit a theory is not the worst research sin”

I don’t know about the “worst” sin. I don’t even know if science can have “sins” although this view has been popularised by Chris Chamber’s book and Neuroskeptic’s Circles of Scientific Hell. Note that “inventing data”, a.k.a. going Full-Stapel, is considered the worst affront to the scientific method in the latter worldview. “Massaging data” is perhaps not the same as outright making it up, but on the spectrum of data fabrication it is certainly trending in that direction.

Science is about seeking the truth. In Cohen’s words, “science should above all be about explanation”. It is about finding regularities, relationships, links, and eventually – if we’re lucky – laws of nature that help us make sense of a chaotic, complex world. Altering, cherry-picking, or “shoe-horning” data to fit your favourite interpretation is the exact opposite of that.

Now, the truth is that p-hacking, the garden of forking paths, and flexible outcome-contingent analyses all fall under this category. Such QRPs are extremely widespread and to some degree pervade most of the scientific literature. But just because something is common doesn’t mean it isn’t bad. Massaging data inevitably produces a scientific literature of skewed results. The only robust way to minimise these biases is through preregistration of experimental designs and confirmatory replications. We are working towards those becoming more commonplace – but even in their absence it is still possible to do good and honest science.
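To make this concrete, here is a minimal simulation sketch – my own toy illustration, not anything from the articles discussed, with an arbitrary peeking schedule and sample sizes – of just one such practice, optional stopping: peeking at the p-value and collecting more data until it looks “significant”. Even with no true effect at all, the nominal 5% false-positive rate is easily inflated:

```python
# Toy simulation (illustrative assumptions only): two groups with NO true
# difference, but the analyst peeks at the t-test after every batch of data
# and stops as soon as p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_study(peeks=(20, 30, 40, 50), alpha=0.05):
    """Return True if a 'significant' result is found at any peek."""
    x = rng.normal(size=max(peeks))  # group 1, no effect
    y = rng.normal(size=max(peeks))  # group 2, no effect
    return any(stats.ttest_ind(x[:n], y[:n]).pvalue < alpha for n in peeks)

n_sims = 5000
rate = sum(one_study() for _ in range(n_sims)) / n_sims
print(f"False-positive rate with optional stopping: {rate:.3f}")
# Comes out well above the nominal 0.05 – this is how "massaged" analyses
# skew the literature towards spurious effects.
```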

In contrast, prolifically engaging in such dubious practices, as Wansink appears to have done, fundamentally undermines the validity of scientific research. It is not a minor misdemeanour.

“We forget too easily that the history of science is rich with errors”

I sympathise with the notion that science has always made errors. One of my favourite quotes about the scientific method is that it is about “finding better ways of being wrong.” But we need to be careful not to conflate some very different things here.

First of all, a better way of being wrong is an acknowledgement that science is never a done deal. We don’t just figure out the truth but constantly seek to home in on it. Our hypotheses and theories are constantly refined, hopefully by gradually becoming more correct, but there will also be occasional missteps down a blind alley.

But these “errors” are not at all the same thing as the practices Wansink appears to have engaged in. These were not mere mistakes. While the problems with many QRPs (like optional stopping) have long been underappreciated, a lot of the problems in Wansink’s retracted articles are quite deliberate distortions of scientific facts. For most of them, he could and should have known better. That is not the same as simply getting things wrong.

The examples Cohen offers for the “rich errors” in past research are also not applicable. Miscalculating the age of the earth or presenting an incorrect equation are genuine mistakes. They might be based on incomplete or distorted knowledge. Publishing an incorrect hypothesis (e.g., that DNA is a triple helix) is not the same as mining data to confirm a hypothesis. It is perfectly valid to derive new hypotheses, even if they turn out to be completely false. For example, I might posit that gremlins cause the outdoor socket on my deck to fail. Sooner or later, a thorough empirical investigation will disprove this hypothesis and the evidence will support an alternative, such as that the wiring is faulty. The gremlin hypothesis may be false – and it is also highly implausible – but nothing stops me from formulating it. Wansink’s problem wasn’t with his hypotheses (some of which may indeed turn out to be true) but with the irregularities in the data he used to support them.

“Underlying it all is a suspicion that he was in the habit of forming hypotheses and then searching for data to support them”

Ahm, no. Forming hypotheses before collecting data is how it’s supposed to work. Using Cohen’s “generous perspective”, this is indeed how hypothetico-deductive research works. To what extent this relates to Wansink’s “research sin” depends on what exactly is meant here by “searching for data to support” your hypotheses. If it implies deliberately looking for data that confirms your prior belief while ignoring or rejecting observations that contradict it, then that is not merely a questionable research practice but antithetical to the whole scientific endeavour itself. It is also a perfect definition of confirmation bias, something that afflicts all human beings to some extent, scientists included. Scientists must find ways to protect themselves from fooling themselves in this way, and that entails constant vigilance and scepticism towards our own pet theories. Engaging in this behaviour actively and deliberately, in stark contrast, is not science but pure story-telling.

The critics are not merely “indulging themselves in a myth of neutral observers uncovering ‘facts'”. Quite to the contrary, I think Wansink’s critics are well aware of the human fallibility of scientists. People are rarely perfectly neutral when it comes to hypotheses. Even when you are not emotionally invested in which of multiple explanations for a phenomenon is correct, they are frequently not equal in terms of how exciting it would be to confirm them. Finding gremlins under my deck would certainly be more interesting (and scary?) than evidence of faulty wiring.

But in the end, facts are facts. There are no “alternative facts”. Results are results. We can differ on how to interpret them but that doesn’t change the underlying data. Of course, some data are plainly wrong because they come from incorrect measurements, artifacts, or statistical flukes. These results are wrong. They aren’t facts even if we think of them as facts at the moment. Sooner or later, they will be refuted. That’s normal. But this is a long shot from deliberately misreporting or distorting facts.

“…studies like Wansink’s can be of value if they offer new clarity in looking at phenomena…”

This seems to be the crux of Cohen’s argument. Somehow, despite all the dubious and possibly fraudulent nature of his research, Wansink still makes a useful contribution to science. How exactly? What “new clarity” do we gain from cherry-picked results?

I can see, though, that Wansink may “stimulate ideas for future investigations”. There is no denying that he is a charismatic presenter and that some of his ideas were ingenious. I like the concept of self-filling soup bowls. I do think we must ask some critical questions about this experimental design, such as whether people can truly be unaware that the soup level doesn’t go down as they spoon it up. But the idea is neat and there is certainly scope for future research.

But don’t present this as some kind of virtue. By all means, give him credit for developing a particular idea or a new experimental method. But please, let’s not pretend that this excuses the dubious and deliberate distortion of the scientific record. It does not justify the money that has quite possibly been wasted on changing how people eat, or the advice given to schools based on false research. Deliberately telling untruths is not an error; it is called a lie.

[Image: “Gremlins think it’s fun to hurt you – use care always” poster (NARA)]

 

Irish Times OpEds are just bloody awful at science (n=1)

TL;DR: No, men are not “better at science” than women.

Clickbaity enough for you? I cannot honestly say I have read a lot of OpEds in the Irish Times so the evidence for my titular claim is admittedly rather limited. But it is still more solidly grounded in actual data than this article published yesterday in the Irish Times. At least I have one data point.

The article in question, a prime example of Betteridge’s Law, is entitled “Are men just better at science than women?“. I don’t need to explain why such a title might be considered sensationalist and controversial. The article itself is an “Opinion” piece, thus allowing the publication to disavow any responsibility for its authorship whilst still raking in the views from this blatant clickbait. In it, the author discusses some new research reporting gender differences in systemising vs empathising behaviour and puts it in the context of a new government initiative to specifically hire female professors, because apparently there is some irony here. He goes on a bit about something called “neurosexism” (is that a real word?) and talks about “hard-wired” brains*.

I cannot quite discern if the author thought he was being funny or if he is simply scientifically illiterate but that doesn’t really matter. I don’t usually spend much time commenting on stuff like that. I have no doubt that the Irish Times, and this author in particular, will be overloaded with outrage and complaints – or, to use the author’s own words, “beaten up” on Twitter. There are many egregious misrepresentations of scientific findings in the mainstream media (and often enough, scientists and/or the university press releases are the source of this). But this example of butchery is just so bad and infuriating in its abuse of scientific evidence that I cannot let it slip past.

The whole argument, if this is what the author attempted, is just riddled with logical fallacies and deliberate exaggerations. I have no time or desire to go through them all. Conveniently, the author already addresses a major point himself by admitting that the study in question does not actually speak to male brains being “hard-wired” for science, but that any gender differences could be arising due to cultural or environmental factors. Not only that, he also acknowledges that the study in question is about autism, not about who makes good professors. So I won’t dwell on these rather obvious points any further. There are much more fundamental problems with the illogical leaps and mental gymnastics in this OpEd:

What makes you “good at science”?

There is a long answer to this question. It most certainly depends somewhat on your field of research and the nature of your work. Some areas require more manual dexterity, whilst others may require programming skills, and others yet call for a talent for high-level maths. As far as we can generalise, in my view necessary traits of a good researcher are: intelligence, creativity, patience, meticulousness, and a dedication to seek the truth rather than confirming theories. That last one probably goes hand-in-hand with some scepticism, including a healthy dose of self-doubt.

There is also a short answer to this question. A good scientist is not measured by their Systemising Quotient (SQ), a self-report measure that quantifies “the drive to analyze or build a rule-based system”. Academia is obsessed with metrics like the h-index (see my previous post) but even pencil pushers and bean counters** in hiring or grant committees haven’t yet proposed to use SQ to evaluate candidates***.

I suspect it is true that many scientists score high on the SQ and also the related Autism-spectrum Quotient (AQ) which, among other things, quantifies a person’s self-reported attention to detail. Anecdotally, I can confirm that a lot of my colleagues score higher than the population average on AQ. More on this in the next section.

However, none of this implies that you need to have a high SQ or AQ to be “good at science”, whatever that means. That assertion is a logical fallacy called affirming the consequent. We may agree that “systemising” characterises a lot of the activities a typical scientist engages in, but there is no evidence that it is sufficient for being a good scientist. It could simply mean that systemising people are attracted to science and engineering jobs. It certainly does not mean that a non-systemising person cannot be a good scientist.

Small effect sizes

I know I rant a lot about relative effect sizes such as Cohen’s d, where the mean difference is normalised by the variability. I feel that in a lot of research contexts these are given undue weight because the variability itself isn’t sufficiently controlled. But for studies like this we can actually be fairly confident that they are meaningful. The scientific study had a pretty whopping sample size of 671,606 (although that includes all their groups) and also used validation data. The residual physiologist inside me retains his scepticism about self-report questionnaire type measures, but even I have come to admit that a lot of questionnaires can be pretty effective. I think it is safe to say that the Big 5 Personality Factors or the AQ tap into some meaningful real factors. Further, whatever latent variance there may be on these measures, that is probably outweighed by collecting such a massive sample. So the Cohen’s d this study reports is probably quite informative.

What does this say? Well, the difference in SQ between males and females was d = 0.31. In other words, the distributions of SQ for the two sexes overlap quite considerably, but the distribution for males is somewhat shifted towards higher values. Thus, while the average man has a subtly higher SQ than the average woman, a rather considerable number of women will have higher SQs than the average man. The study helpfully plots these distributions in Figure 1****:

[Figure 1 from the study]
Distributions of SQ in control females (cyan), control males (magenta), autistic females (red), and autistic males (green).

The relevant curves here are the controls in cyan and magenta. Sorry, colour vision deficient people, the authors clearly don’t care about you (perhaps they are retinasexists?). You’ll notice that the modes of the female and male distributions are really not all that far apart. More noticeable is the skew of all these distributions with a long tail to the right: Low SQs are most common in all groups (including autism) but values across the sample are spread across the full range. So by picking out a random man and a random woman from a crowd, you can be fairly confident that their SQs are both on the lower end but I wouldn’t make any strong guesses about whether the man has a higher SQ than the woman.
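If you want to put a number on that intuition, here is a quick back-of-the-envelope sketch (my own, not from the study, and assuming normal distributions even though the real SQ distributions are clearly right-skewed) of what a Cohen’s d of 0.31 amounts to:

```python
# What does d = 0.31 mean in practice? (Normality is assumed here purely
# for illustration; the actual SQ distributions are right-skewed.)
from math import sqrt
from statistics import NormalDist

d = 0.31  # reported male-female SQ difference in standard deviation units

# Probability that a randomly chosen man outscores a randomly chosen woman
prob_superiority = NormalDist().cdf(d / sqrt(2))

# Proportion of women scoring above the male average
women_above_male_mean = 1 - NormalDist().cdf(d)

print(f"P(random man > random woman): {prob_superiority:.2f}")                # ~0.59
print(f"Share of women above the average man: {women_above_male_mean:.2f}")  # ~0.38
```

In other words, under these simplifying assumptions it is barely better than a coin flip.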

However, it gets even tastier because the authors of the study actually also conducted an analysis splitting their data from controls into people in Science, Technology, Engineering, or Maths (STEM) professions compared to controls who were not in STEM. The results (yes, I know the colour code is now weirdly inverted – not how I would have done it…) show that people in STEM, whether male or female, tend to have larger SQs than people outside of STEM. But again, the average difference here is actually small and most of it plays out in the rightward tail of the distributions. The difference between males and females in STEM is also much less distinct than for people outside STEM.

[Figure from the study: SQ distributions by sex and STEM profession]
Distributions of SQ in STEM females (cyan), STEM males (magenta), control females (red), and control males (green).

So, as already discussed in the previous section, it seems to be the case that people in STEM professions tend to “systemise” a bit more. It also suggests that men systemise more than women, but that this difference probably decreases for people in STEM. None of this tells us anything about whether people’s brains are “hard-wired” for systemising, whether it is about cultural and environmental differences between men and women, or indeed whether being trained in a STEM profession might make people more systemising. It definitely does not tell you who is “good at science”.

What if it were true?

So far so bad for those who might want to make that interpretive leap. But let’s give them the benefit of the doubt and ignore everything I said up until now. What if it were true that systemisers are in fact better scientists? Would that invalidate government or funders initiatives to hire more female scientists? Would that be bad for science?

No. Even if there were a vast difference in systemising between men and women, or between STEM and non-STEM professions, all such a hiring policy would achieve is to increase the number of good female scientists – exactly what the policy is intended to do. Let me try an analogy.

Basketball players in the NBA tend to be pretty damn tall. Presumably it is easier to dunk when you measure 2 metres than when you’re Tyrion Lannister. Even if all other necessary skills are equal, there is a clear selection pressure for tall people to get into top basketball teams. Now let’s imagine a team decided they want to hire more short players. They declare they will hire 10 players who cannot be taller than 1.70m. The team will have try-outs and will still seek to get the best players out of their pool of applicants. If they apply an objective criterion for what makes a good player, such as the ability to score consistently, they will only hire short players with excellent aim or who can jump really high. In fact, these shorties will on average be better at aiming and/or jumping than the giants they already have on their team. The team selects for the ability to score. Shorties and Tallies get there via different means but they both get there.

In this analogy, being a top scorer is being a systemiser, which in turn makes you a good scientist. Giants tend to score high because they find it easy to reach the basket. Shorties score high because they have other skills that compensate for their lack of height. Women can be good systemisers despite not being men.
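For the sceptically inclined, here is a toy simulation of that selection argument – entirely my own, with made-up numbers, and granting for the sake of argument the untenable premise that systemising is the criterion for being a good scientist:

```python
# Toy model: even if men systemised a bit more on average (d = 0.31) and
# systemising really were the hiring criterion, selecting the top candidates
# from a women-only applicant pool still yields hires far above the male
# population average. You select on the criterion, not on group membership.
import numpy as np

rng = np.random.default_rng(0)
d = 0.31
n_applicants = 10_000

women = rng.normal(0.0, 1.0, n_applicants)  # female 'systemising' scores
men = rng.normal(d, 1.0, n_applicants)      # male scores, shifted by d

hired_women = women[women >= np.quantile(women, 0.95)]  # top 5% of each pool
hired_men = men[men >= np.quantile(men, 0.95)]

print(f"Mean score of hired women: {hired_women.mean():.2f}")  # ~2.1
print(f"Mean score of hired men:   {hired_men.mean():.2f}")    # ~2.4
print(f"Male population average:   {d:.2f}")
```

Both hired groups end up roughly two standard deviations above their population means, so the policy simply adds more high-systemising (and thus, by this assumption, good) scientists.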

The only scenario in which such a specific hiring policy could be counterproductive is if two conditions are met: 1) The difference between groups in the critical trait (i.e., systemising) is vast and 2) the policy mandates hiring from a particular group without any objective criteria. We have already established that the former condition isn’t fulfilled here – the difference in systemising between men and women is modest at best. The latter condition is really a moot point because this is simply not how hiring works in the real world. Hiring committees don’t usually just offer jobs to the relatively best person out of the pool but also consider the candidates’ objective abilities and achievements. This is even more pertinent here because all candidates in this case will already be eligible for a professorial position anyway. So all that will in fact happen is that we end up with more female professors who will also happen to be high in systemising.

Bad science reporting

Again, this previous section is based on the entirely imaginary and untenable assumption that systemisers are better scientists. I am not aware of any evidence for that – in part because we cannot actually quantify very well what makes a good scientist. The metrics academics actually (and sadly) use for hiring and funding decisions probably don’t quantify that either, but I am not even aware of any link between systemising and those metrics. Is there a correlation between h-indices (relative to career age) and SQ? I doubt it.

What we have here is a case of awful science reporting. Bad science journalism and the abuse of scientific data for nefarious political purposes are hardly new phenomena – and they won’t just disappear. But the price of freedom (to practice science) is eternal vigilance. I believe that as scientists we have a responsibility to debunk such blatant misapprehensions by journalists who, I suspect, have never even set foot in an actual lab or spoken to any actual scientists.

Some people assert that improving the transparency and reliability of research will hurt the public’s faith in science. Far from it, I believe those things can show people how science really works. The true damage to how the public perceives science is done by garbage articles in the mainstream media like this one – even if it is merely offered as an “opinion”.

[Image: Tyson Chandler]
By Keith Allison

*) Brains are not actually hard-wired to do anything. Leaving the old Hebbian analogy aside, brains aren’t wired at all, period. They are soft, squishy, wet sponges containing lots of neuronal and glial tissue plus blood vessels. Neurons connect via synapses between axons and dendrites and this connectivity is constantly regulated and new connections grown while others are pruned. This adaptability is one of the main reasons why we even have brains, and lies at the heart of the intelligence, ingenuity, and versatility of our species.

**) I suspect a lot of the pencil pushers and bean counters behind metrics like impact factors or the h-index might well be Systemisers.

***) I hope none of them read this post. We don’t want to give these people any further ideas…

****) Isn’t open access under Creative Commons license great?