When the hole changes the pigeon

or How innocent assumptions can lead to wrong conclusions

I promised you a (neuro)science post. Don’t let the title mislead you into thinking we’re talking about world affairs and societal ills again. While pigeonholing is directly related to polarised politics and social media, for once this is not what this post is about. Rather, it is about a common error in data analysis. Although there have been numerous expositions of similar issues over the decades, it is, as we’ve learned the hard way, a surprisingly easy mistake to make. A scientific article by Susanne Stoll laying out this problem in more detail is currently available as a preprint.

Pigeonholing (Source: https://commons.wikimedia.org/wiki/File:TooManyPigeons.jpg)

Data binning

In science you often end up with large data sets, with hundreds or thousands of individual observations subject to considerable variance. For instance, in my own field of retinotopic population receptive field (pRF) mapping, a given visual brain area may have a few thousand recording sites, and each has a receptive field position. There are many other scenarios of course. It could be neural firing, or galvanic skin responses, or eye positions recorded at different time points. Or it could be hundreds or thousands of trials in a psychophysics experiment etc. I will talk about pRF mapping because this is where we recently encountered the problem and I am going to describe how it has affected our own findings – however, you may come across the same issue in many guises.

Imagine that we want to test how pRFs move around when you attend to a particular visual field location. I deliberately use this example because it is precisely what a bunch of published pRF studies did, including one of ours. There is some evidence that selective attention shifts the position of neuronal receptive fields, so it is not far-fetched that it might shift pRFs in fMRI experiments also. Our study for instance investigated whether pRFs shift when participants are engaged in a demanding (“high load”) task at fixation, compared to a baseline condition where they only need to detect a simple colour change of the fixation target (“low load”). Indeed, we found that across many visual areas pRFs shifted outwards (i.e. away from fixation). This suggested to us that the retinotopic map reorganises to reflect a kind of tunnel vision when participants are focussed on the central task.

What would be a good way to quantify such map reorganisation? One simple way might be to plot each pRF in the visual field with a vector showing how it is shifted under the attentional manipulation. In the graph below, each dot shows a pRF location under the attentional condition, and the line shows how it has moved away from baseline. Since there is a large number of pRFs, many of which are affected by measurement noise or other errors, these plots can be cluttered and confusing:

Plotting shift of each pRF in the attention condition relative to baseline. Each dot shows where a pRF landed under the attentional manipulation, and the line shows how it has shifted away from baseline. This plot is a hellishly confusing mess.

Clearly, we need to do something to tidy up this mess. So we take the data from the baseline condition (in pRF studies, this would normally be attending to a simple colour change at fixation) and divide the visual field up into a number of smaller segments, each of which contains some pRFs. We then calculate the mean position of the pRFs from each segment under the attentional manipulation. Effectively, we summarise the shift from baseline for each segment:

We divide the visual field into segments based on the pRF data from the baseline condition and then plot the mean shift in the experimental condition for each segment. A much clearer graph that suggests some very substantial shifts…

This produces a much clearer plot that suggests some interesting, systematic changes in the visual field representation under attention. Surely, this is compelling evidence that pRFs are affected by this manipulation?

False assumptions

Unfortunately it is not1. The mistake here is to assume that there is no noise in the baseline measure that was used to divide up the data in the first place. If our baseline pRF map were a perfect measure of the visual field representation, then this would have been fine. However, like most data, pRF estimates are variable and subject to many sources of error. The misestimation is also unlikely to be perfectly symmetric – for example, there are several reasons why it is more likely that a pRF will be estimated closer to central vision than in the periphery. This means there could be complex and non-linear error patterns that are very difficult to predict.

The data I showed in these figures are in fact not from an attentional manipulation at all. Rather, they come from a replication experiment where we simply measured a person’s pRF maps twice over the course of several months. One thing we do know is that pRF measurements are quite robust, stable over time, and even similar between scanners with different magnetic field strengths. What this means is that any shifts we found are most likely due to noise. They are completely artifactual.
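
To see how easily this happens, here is a minimal simulation sketch (purely illustrative, with invented numbers, and not the analysis code from any of the studies discussed here). Two noisy “sessions” measure the very same underlying pRF positions; binning by the eccentricity estimated in session 1 and then averaging the session-2 estimates within each bin conjures up apparent shifts from noise alone. (I use simple eccentricity bins here for brevity; the same logic applies to the visual field segments in the plots above.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ground truth: 2000 pRF positions (x, y) in degrees of visual angle
n = 2000
true_pos = rng.uniform(-8, 8, size=(n, 2))

# Two independent, equally noisy measurements of the *same* positions
noise_sd = 1.5
session1 = true_pos + rng.normal(0, noise_sd, size=(n, 2))   # "baseline"
session2 = true_pos + rng.normal(0, noise_sd, size=(n, 2))   # "attention"

# Bin by eccentricity as measured in the noisy baseline (session 1)
ecc1 = np.linalg.norm(session1, axis=1)
ecc2 = np.linalg.norm(session2, axis=1)
edges = np.arange(0, 10, 2)                 # eccentricity bins of 2 deg width
bins = np.digitize(ecc1, edges)

for b in range(1, len(edges)):
    sel = bins == b
    shift = np.mean(ecc2[sel] - ecc1[sel])  # apparent eccentricity shift per bin
    print(f"bin {edges[b-1]}-{edges[b]} deg: mean shift = {shift:+.2f} deg")
```

Because the pRFs assigned to the most foveal bins are disproportionately those whose baseline eccentricity happened to be underestimated (and vice versa for the most peripheral bins), the binned averages show outward shifts near fixation and inward shifts in the periphery, even though nothing about the underlying positions has changed.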

When you think about it, this error is really quite obvious: sorting observations into clear categories can only be valid if you can be confident in the continuous measure on which you base those categories. Pigeonholing can only work if you can be sure into which hole each pigeon belongs. This error is also hardly new. It has been described in numerous forms as regression to the mean, and it rears its ugly head every few years in different fields. It is also related to circular inference, which caused quite a stir in cognitive and social neuroscience a few years ago. Perhaps the reason for this is that it is a damn easy mistake to make – but that doesn’t make the face-palming moment any less frustrating.

It is not difficult to correct this error. In the plot below, I used an independent map from yet another, third pRF mapping session to divide up the visual field. Then I calculated how the pRFs in each visual field segment shifted on average between the two experimental sessions. While some shift vectors remain, they are considerably smaller than in the earlier graph. Again, keep in mind that these are simple replication data and we would not really expect any systematic shifts. There certainly does not seem to be a very obvious pattern here – perhaps there is a bit of a clockwise shift in the right visual hemifield but that breaks down in the left. Either way, this analysis gives us an estimate of how much variability there may be in this measurement.

We use an independent map to divide the visual field into segments. Then we calculate the mean position for each segment in the baseline and the experimental condition, and work out the shift vector between them. For each segment, this plot shows that vector. This plot loses some information, but it shows how much and into which direction pRFs in each segment shifted on average.

This approach of using a third, independent map loses some information because the vectors only tell you the direction and magnitude of the shifts, not exactly where the pRFs started from and where they end up. Often the magnitude and direction of the shift are all we really need to know. However, when the exact position is crucial, we could use other approaches. We will explore this in greater depth in upcoming publications.
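
Under the same toy assumptions as the simulation sketch above (again, made-up data for illustration only), the fix is simply to assign pRFs to bins using a measurement whose noise is independent of both sessions being compared:

```python
import numpy as np

rng = np.random.default_rng(1)
n, noise_sd = 2000, 1.5

true_pos = rng.uniform(-8, 8, size=(n, 2))
session1 = true_pos + rng.normal(0, noise_sd, size=(n, 2))  # baseline
session2 = true_pos + rng.normal(0, noise_sd, size=(n, 2))  # "attention"
session3 = true_pos + rng.normal(0, noise_sd, size=(n, 2))  # independent map used only for binning

ecc1, ecc2, ecc3 = (np.linalg.norm(s, axis=1) for s in (session1, session2, session3))
edges = np.arange(0, 10, 2)

for label, ecc_for_binning in [("binned by session 1 (biased)  ", ecc1),
                               ("binned by session 3 (unbiased)", ecc3)]:
    bins = np.digitize(ecc_for_binning, edges)
    shifts = [np.mean(ecc2[bins == b] - ecc1[bins == b]) for b in range(1, len(edges))]
    print(label, np.round(shifts, 2))
```

The biased analysis reproduces the spurious shift pattern, whereas binning by the independent third map yields per-bin shifts that hover around zero, which is the correct answer for a pure replication.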

On the bright side, the example I picked here is probably extreme because I didn’t restrict these plots to a particular region of interest but used all supra-threshold voxels in the occipital cortex. A more restricted analysis would remove some of that noise – but the problem nevertheless remains. How much it skews the findings depends very much on how noisy the data are. Data tend to be less noisy in early visual cortex than in higher-level brain regions, which is where people usually find the most dramatic pRF shifts…

Correcting the literature

It is so easy to make this mistake that you can find it all over the pRF literature. Clearly, neither authors nor reviewers have given it much thought. It is definitely not confined to studies of visual attention, although this is how we stumbled across it. It could be a comparison between different analysis methods or stimulus protocols. It could be studies measuring the plasticity of retinotopic maps after visual field loss. Ironically, it could even be studies investigating the artifacts that arise when such plasticity is mapped incorrectly. It is not restricted to the kinds of plots I showed here but should affect any form of binning, including the eccentricity binning that is most common in the literature. We suspect the problem is also pervasive in many other fields or in studies using other techniques. Only a few years ago a similar issue was described by David Shanks in the context of studying unconscious processing. It is also related to warnings you may occasionally hear about using median splits – really just a simpler version of the same approach.

I cannot tell you if the findings from other studies that made this error are spurious. To know that, we would need access to the data and to reanalyse these studies. Many of them were published before data and code sharing became relatively common2. Moreover, you really need a validation dataset, like the replication data in my example figures here. The diversity of analysis pipelines and experimental designs makes this very complex – no two of these studies are alike. The error distributions may also vary between studies, so ideally we would need replication datasets for each study.

In any case, as far as our attentional load study is concerned, after reanalysing these data with unbiased methods, we found little evidence of the effects we published originally. While there is still a hint of pRF shifts, these are no longer statistically significant. As painful as this is, we therefore retracted that finding from the scientific record. There is a great stigma associated with retraction, because of the shady circumstances under which it often happens. But to err is human – and this is part of the scientific method. As I said many times before, science is self-correcting but that is not some magical process. Science doesn’t just happen, it requires actual scientists to do the work. While it can be painful to realise that your interpretation of your data was wrong, this does not diminish the value of this original work3 – if anything this work served an important purpose by revealing the problem to us.

We mostly stumbled across this problem by accident. Susanne Stoll and Elisa Infanti conducted a more complex pRF experiment on attention and found that the purported pRF shifts in all experimental conditions were suspiciously similar (you can see this in an early conference poster here). It took us many months of digging, running endless simulations, complex reanalyses, and sometimes heated arguments before we cracked that particular nut. The problem may seem really obvious now – it sure as hell wasn’t before all that.

This is why this erroneous practice appears to be widespread in this literature and may have skewed the findings of many other published studies. This does not mean that all these findings are false but it should serve as a warning. Ideally, other researchers will also revisit their own findings but whether or not they do so is frankly up to them. Reviewers will hopefully be more aware of the issue in future. People might question the validity of some of these findings in the absence of any reanalysis. But in the end, it doesn’t matter all that much which individual findings hold up and which don’t4.

Check your assumptions

I am personally more interested in taking this whole field forward. This issue is not confined to the scenario I described here. pRF analysis is often quite complex. So are many other studies in cognitive neuroscience and, of course, in many other fields as well. Flexibility in study designs and analysis approaches is not a bad thing – it is in fact essential that we can adapt our experimental designs to the scientific questions we want to address.

But what this story shows very clearly is the importance of checking our assumptions. This is all the more important when using the complex methods that are ubiquitous in our field. As cognitive neuroscience matures, it is critical that we adopt good practices in ensuring the validity of our methods. In the computational and software development sectors, it is to my knowledge commonplace to test algorithms on conditions where the ground truth is known, such as random and/or simulated data.

This idea is probably not even new to most people and it certainly isn’t to me. During my PhD there was a researcher in the lab who had concocted a pretty complicated analysis of single-cell electrophysiology recordings. It involved lots of summarising and recentering of neuronal tuning functions to produce the final outputs. Neither I nor our supervisor really followed every step of this procedure based only on our colleague’s description – it was just too complex. But eventually we suspected that something might be off and so we fed random numbers to the algorithm – lo and behold the results were a picture perfect reproduction of the purported “experimental” results. Since then, I have simulated the results of my analyses a few other times – for example, when I first started with pRF modelling or when I developed new techniques for measuring psychophysical quantities.

This latest episode taught me that we must do this much more systematically. For any new design, we should conduct control analyses to check how it behaves with data for which the ground truth is known. It can reveal statistical artifacts that might hide inside the algorithm but also help you determine the method’s sensitivity and thus allow you to conduct power calculations. Ideally, we would do that for every new experiment even if it uses a standard design. I realise that this may not always be feasible – but in that case there should be a justification why it is unnecessary.
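
As a bare-bones template for such a check (hypothetical names throughout; the point is the structure, not the specific statistic), you simply run your actual analysis function on simulated data whose ground truth you control, once under the null and once with an effect of known size:

```python
import numpy as np

def my_analysis(data):
    # Stand-in for your real pipeline; return the summary statistic you report
    return np.mean(data)

rng = np.random.default_rng(0)
n_sims, n_obs, true_effect = 1000, 50, 0.5

null_stats = np.array([my_analysis(rng.normal(0, 1, n_obs)) for _ in range(n_sims)])
alt_stats  = np.array([my_analysis(rng.normal(true_effect, 1, n_obs)) for _ in range(n_sims)])

# Does the pipeline manufacture effects out of nothing?
criterion = np.quantile(null_stats, 0.95)      # 5% false-positive threshold
print("95th percentile under the null:", round(criterion, 3))

# How sensitive is it to an effect of known size? (a crude power estimate)
print("estimated power:", np.mean(alt_stats > criterion))
```

If the null simulations reproduce your “experimental” result, as they did for my colleague’s single-cell analysis, you have a problem; if they do not, the simulations with a known effect at least tell you how large an effect your design can plausibly detect.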

Because what this really boils down to is simply good science. When you use a method without checking that it works as intended, you are effectively doing a study without a control condition – quite possibly the original sin of science.

Acknowledgements

In conclusion, I quickly want to thank several people: First of all, Susanne Stoll deserves major credit for tirelessly pursuing this issue in great detail over the past two years with countless reanalyses and simulations. Many of these won’t ever see the light of day but helped us wrap our heads around what is going on here. I want to thank Elisa Infanti for her input and in particular the suggestion of running the analysis on random data – without this we might never have realised how deep this rabbit hole goes. I also want to acknowledge the patience and understanding of our co-authors on the attentional load study, Geraint Rees and Elaine Anderson, for helping us deal with all the stages of grief associated with this. Lastly, I want to thank Benjamin de Haas, the first author of that study for honourably doing the right thing. A lesser man would have simply booked a press conference at Current Biology Total Landscaping instead to say it’s all fake news and announce a legal challenge5.

Footnotes:

  1. The sheer magnitude of some of these shifts may also be scientifically implausible, an issue I’ve repeatedly discussed on this blog already. Similar shifts have however been reported in the literature – another clue that perhaps something is awry in these studies…
  2. Not that data sharing is enormously common even now.
  3. It is also a solid data set with a fairly large number of participants. We’ve based our canonical hemodynamic response function on the data collected for this study – there is no reason to stop using this irrespective of whether the main claims are correct or not.
  4. Although it sure would be nice to know, wouldn’t it?
  5. Did you really think I’d make it through a blog post without making any comment like this?

Removing the domain

As you may have noticed, I haven’t been posting very often in the last few years – and when I did, it was usually only a short post. I simply do not have as much time to devote to this blog as I used to, due both to a massively increased workload and to personal reasons. I am not saying that I will stop writing blog posts altogether – please be assured that I’ll definitely continue this site. As a matter of fact, I have some ideas for future posts and *drum roll* some of them may even be related to *gasp* cognitive neuroscience! :O

However, I simply don’t update this blog often enough to justify the expense of paying for the neuroneurotic.net domain. So as of December 2020, this site will no longer have its own domain (you can still find it at neuroneurotic.wordpress.com). Moreover, the site will have ads again – I hope these won’t be too disruptive to your enjoyment of my wonderfully crafted pieces of internet poetry.

Implausible hypotheses

A day may come when I will stop talking about conspiracy theories again, but it is not this day. There is probably nothing new about conspiracy theories – they have doubtless been with us since our evolutionary ancestors gained sentience – but I fear that they are a particularly troublesome scourge of our modern society. The global connectivity of the internet and social media enables the spread of this misinformation pandemic in unprecedented ways, just as our physical connectivity facilitates the spread of an actual virus. Also like an actual virus, they can be extremely dangerous and destructive.

But fear not, I will try to move this back to being a blog on neuroscience eventually :P. Today’s post is about some tools we can use to determine the plausibility of a hypothesis. I have written about this before. Science is all about formulating hypotheses and putting them to the test. Not all hypotheses are created equal however – some hypotheses are so obviously true they hardly need testing while others are so implausible that testing them is pointless. Using conspiracy theories as an example, here I will list some tools I use to spot what I consider to be highly implausible hypotheses. I think this is a perfect example, because despite the name conspiracy theories are not actually scientific theories at all – they are in fact conspiracy hypotheses and most are pretty damn implausible.

This is not meant to be an exhaustive list. There may be other things you can think of that help you determine that a claim is implausible, for example Carl Sagan’s chapter on The Fine Art of Baloney Detection. You can also relate much of this back to common logical fallacies. My post merely lists a few basic features that I frequently encounter out there in the wild. Perhaps you’ll find this list useful in your own daily face-palming experiences.

The Bond Villain

Is a central feature of this purported plot a powerful billionaire with infinite funding and unlimited resources and power at their disposal? Do they have a convoluted plan that just smells evil, such as killing off large parts of the world population for the “common good”? You know, like injecting them with vaccines that sterilise them?

The House of Cards

Is the convoluted plan so complicated, and crafted so carefully numerous steps in advance, that each little event has to fall into place just right for it to work? You know, like using 5G tech to weaken people’s immune systems so that it starts a global pandemic with a virus you created in your secret lab, so that everyone happily gets injected with your vaccine (which will contain nanoscale microchips) but not with any other vaccine that others might have developed in the meantime? And obviously you know your vaccine will work against the virus because you could test it thoroughly without anybody else finding out about it?

The Future Tech

Does the plan involve some technology you’ve first heard of on Star Trek or Doctor Who? Is a respiratory illness caused by mobile phone technology? Is someone injecting nanoscale computer chips with a vaccine? Is there brain scanning technology with spatial and temporal resolution that would render all of my research completely obsolete?

The Red Pill

Have you been living a lie all your life? Will embracing the idea mean that you have awoken and/or finally see what’s right in front of you? Are most other people brainwashed sheeple? Did a YouTube video by someone you’ve never heard of finally open your eyes to reality?

The Dull Razorblade

Is the idea built on multiple factors that are not actually necessary to explain the events that unfurled? Was a virus “obviously” created in a lab even though countless viruses occur naturally? Is what they claim happened really more likely than the same thing happening by chance? Is the most obvious explanation for why the motif of Orion’s Belt appears throughout history and across the world that aliens visited from Rigel 7, rather than that it is one of the most recognisable constellations in the night sky?

The World Government

Does it require the deep cooperation of most governments in the world whilst they squabble and vehemently disagree in the public limelight? Is the nefarious scheme perpetrated by the United Nations, which are famous for always agreeing, being efficient, and never having any conflict? (Note that occasionally it may only be the European Union rather than the UN).

The Flawed Explanation

Are the individual hypotheses that form the bigger conspiracy mutually exclusive? Is it based on current geography or environmental conditions even though it happened hundreds, thousands, or millions of years in the past? Does it involve connecting dots on the Mercator world map in straight lines which would actually not be straight on the globe or any other map projection?

The Unlikely Saint

Is the person most criticised, ridiculed, or reviled by the mainstream media in fact the good guy? Imagine, if you will, a world leader who is a former intelligence operative and spy master and who has invaded several sovereign countries. Is he falsely accused of assassinating his enemies and pursuing cold political calculation, when really he is just a friendly, misunderstood teddy bear? Or perhaps that demagogue, who riles the masses with hateful rhetoric and who has committed acts of corruption in broad daylight, is in fact defending us from evil puppy-eating monsters? Does the CEO of a fossil fuel company in truth protect us from all those environmentalist hippies in centre-right governments who want to poison us with clean air and their utopian idealism of a habitable planet?

The Vast Network

Is everyone in on it? All scientists including all authors, editors and peer reviewers and all the technical support staff and administrators, all influential political leaders and their aides, all medical doctors and nurses and pharmacists, all engineers and all school teachers are involved in this complex scheme to fool the unwashed masses even though there has never been a credible whistleblower? Have they remained silent even though the Moon Landing was hoaxed half a century ago? Do all scientists working on a vaccine for widespread disease actually want to inject you with nanoscale microchips? Is there fortunately a YouTuber whose videos finally lay bare this outrageous, evil scheme?

The Competent Masterminds

Does it assume an immense level of competence and skill on the part of political leaders and organisations to execute their nefarious convoluted plans in the face of clear evidence to the contrary? Are they all just acting like disorganised buffoons to fool us?

The Insincere Questions

Is the framer of the idea “merely asking questions”? Do they simply want you to “think for yourself”? Does thinking for yourself in fact mean agreeing with that person? Do they ask questions about who funded some scientific research without any understanding of how scientific research is actually funded? Are they “not saying it was aliens” even though it is obvious they mean it was in fact aliens?

The Unfalsifiable Claims

Is there no empirical evidence that could prove the claim wrong? Is this argument going in circles or are the goalposts being shifted? Is a fact-checking website untrustworthy because it is “obviously part of the conspiracy”, even though you can directly check their source material, which is of course also all fabricated? Is the idea based on some claim that has been shown to be a fraud, with the fraudster discredited even by his co-authors, but naturally this is just part of an even bigger cover-up and a smear campaign? Can only the purveyor of this conspiracy theory be trusted?

The Torrent of Praise

Is the comment section under this YouTube video or Facebook post a long list of people praising and commending the poster for their truth-telling and use of “evidence”? Do most of these commenters have numbers in their name? Do they have profile pictures that look strangely akin to stock photos? Do any of the comments concur with the original post by adding some anecdote that sounds like an episode of the X-Files?

The Puppet Masters

Does it mention the Elders of Zion, the Illuminati, the Knights Templar, or some similar-sounding secret organisation? Or perhaps the Deep State?

The Flat Earth

Does it blatantly deny reality?

Dear Co-conspirators

I would like to lodge a complaint. Ever since I was admitted to the cabal over a decade ago I have been waiting for my paycheck – but thus far the immense riches I was promised by the Science Illuminati have yet to materialise. If I had known sooner how much money a CEO of a multinational fossil fuel corporation makes, I’d have pursued that truthtelling career instead of pretending that air pollution is bad for your health and that blowing unprecedented amounts of carbon dioxide into the atmosphere could have any consequences for the global climate.

I am also still waiting for the keys to the Ivory Tower. The Lords of Big Vaccine explicitly told us at the induction that these would be forthcoming within days of pledging allegiance to “conventional medicine”. Nobody ever died of the measles, smallpox, or polio. When do I finally get to use the mind-control chips in vaccines? Also, when do I get the antidote to the vaccines I was given before I was anointed as a scientific acolyte?

Now that the roll-out of 5G is well underway, I also hope that you will soon put this to some good use instead of simply causing pandemics with it. Rather we should use it to erase the memories of all those witless fools out there before the secret gets out. I have overheard people suggest they should “follow the money” when looking at scientific research. We really don’t want them to find out how deeply involved funding agencies are in how scientists decide what research they do, and how they have been falling all over themselves just to give us money.

Most importantly, I don’t know why I continue to publish articles in peer-reviewed journals. Why do I keep having these mind-numbing battles with Nitpicker #2? As we all know, this isn’t real “research”. The truths about the universe are best discovered through quick Google searches, our elderly relatives’ Facebook posts, and watching random dudes on YouTube. I understand that we need to keep up the illusion of a body of scientific knowledge and therefore we should publish lots of papers. But surely in that case we should make it easier to do so rather than throwing all those obstacles in our paths, like quibbling about statistics or discussing confounds. Is this why you created all those journals that keep emailing me to publish my eminent work in their inaugural issue?

I’ll be awaiting your reply urgently. If I don’t see all those millions of dollars soon, I might start to think that this conspiracy isn’t working out for me, and I might need to go public with what I know. Don’t think you can silence me by forcing me to wear a face mask!

Hallowed be the Chemtrail,
Sam

(Image: the seal on a US dollar note)

I was wrong…

It has been almost a year since I last posted on this blog. I apologise for this hiatus. I’m afraid it’ll continue as it will probably be even longer before my next post. I simply don’t have the time for the blog these days. But in a brief lull in activities I decided to write this well-overdue post. No, this is not yet another neuroscientist wheeling out his Dunning-Krugerism to make a simplistic and probably dead-wrong (no pun intended) model of the CoViD-19 pandemic, and I certainly won’t be talking about what the governments are doing right or wrong in handling this dreadful situation. But the post is at least moderately related to the pandemic and to this very issue of expertise, and more broadly to current world events.

Years ago, I was locked in an extended debate with parapsychology researchers about the evidence for so-called “psi” effects (precognition, telepathy, and the like). What made matters worse, I made the crucial mistake of also engaging in discussion with some of the social media followers of these researchers. I have since gotten a little wiser and learned about the futility and sanity-destroying nature of social media (but not before going through the pain of experiencing the horrors of social media in other contexts, not least of all Brexitrump). I now try my best (but sometimes still fail) to stay away from this shit and all the outrage junkies and drama royalty. Perhaps I just got tired…

Anyway, in the course of this discussion about “psi” research, I uttered the following phrase (or at least this is a paraphrase – I’m too lazy to look it up):

To be a scientist, is to be a skeptic.

This statement was based on the notions of scientific scrutiny, objectively weighing evidence for or against a proposition, giving the null hypothesis a chance, and never simply taking anybody’s word for it. It was driven by an idealistic and quite possibly naive belief in the scientific method and by the excitement about scientific thinking in some popular circles. But I was wrong.

Taken on their own, none of these things are wrong of course. It is true that scientists should challenge dogma and widely-held assumptions. We should be skeptical of scientific claims, and the same level of scrutiny should be applied to evidence that confirms our predictions as to evidence that seems to refute them. Arguments from authority are logically fallacious and we shouldn’t just take somebody at their word simply because of their expertise. As fallible human beings, we scientists can fool ourselves into believing something that actually isn’t true, regardless of expertise, and perhaps at times expertise can even result in deeply entrenched viewpoints, so it pays to keep an open mind.

But there can be too much of a good thing. Too much skepticism will lead you astray. There is a saying that has been (mis-)attributed to various people in various forms. I don’t know who first said it and I don’t much care either:

It pays to keep an open mind, but not so open that your brains fall out.

Taken at face value, this may seem out-of-place. Isn’t an open mind the exact opposite of being skeptical? Isn’t the purpose of this quote precisely to tell people not to believe just about any nonsense? Yes and no. If you spend any time reading and listening to conspiracy theories – and I strongly advise you not to – then you’ll find that the admonition to keep an open mind is actually a major hallmark of this misguided and dangerous ideology. I’ve seen memes making the rounds that most people are “sheeple” and only those who have awoken to the truth see the world as it really is, and lots of other such crap. Conspiracy theorists do really keep a very open mind indeed.

A belief in wild-eyed conspiracies goes hand-in-hand with the utmost skepticism of anything that smells even remotely like the status quo or our current knowledge. It involves being open to every explanation out there – except to the one thing that is most likely true. It is the Trust No One philosophy. When I was a teenager, I enjoyed the X-Files. One of my favourite video games, Deus Ex, was strongly inspired by a whole range of conspiracy theories. It is great entertainment but some people seem to take this message a little too much to heart. If you look into the plot of Deus Ex, you’ll find some haunting parallels to actual world events, from terrorist attacks on New York City to the pandemic we are experiencing now. Ironically, one could even spin conspiracies about the game itself for that reason.

(Image: Deus Ex cover art)

Conspiracy theories are very much in fashion right now, probably helped by the fact that there is currently a lunatic in the White House who is actively promoting them. It would be all fun and games, if it were only about UFOs, Ancient Aliens, Flat Earth, or the yeti. Or even about the idea that us dogmatic scientists want to suppress the “truth” that precognition is a thing*. But it isn’t just that.

From the origins of the novel coronavirus disease, to vaccinations, to climate change, we are constantly bombarded by conspiratorial thinking and its consequences. People apparently set fire to 5G radio masts because of this. Trust in authorities and experts has been eroded all over the globe. The internet seems to facilitate the spread of these ideas so that they become far more influential than they would have been in past decades – sometimes to very damaging effect.

Can we even blame people? It does become increasingly harder to trust anything or anybody. I have seen first-hand how many news media are more interested in publishing articles to make a political point than in providing factual accuracy. This may not even be deliberate; journalists work to tight deadlines in a struggling industry trying to stay financially afloat. Revelations about the origins of the Iraq War and scandals of collusion and election meddling, some of which may well be true conspiracies while others may be liberal pipe dreams (and many may fall into a grey area in between), don’t help to restore public trust. And of course public trust in science isn’t helped by the Replication Crisis**.

Science isn’t just about being skeptical

Sure, science is about challenging assumptions but it is also about weighing all available evidence. The challenging of assumptions we see in conspiracies is all too often cherry-picking. Science is also about the principle of parsimony and it requires us to determine the plausibility of claims. Crucially, it is also about acknowledging all the things we don’t know. That last point includes recognising that, you know, perhaps an expert in an area actually does occasionally know more about it than you.

No, you shouldn’t just believe anything someone says merely because they have a PhD in the topic. And I honestly don’t know if expertise is really all that crucial in replicating social priming effects – this is for me where the issues with plausibility kick in. But knowing something about a topic gives experts insights that will elude an outsider, and it would serve us well to listen to them. They should certainly have to justify and validate their claims – you shouldn’t just take their word as gospel. But don’t delude yourself into thinking you’ve uncovered “the Truth” by disbelieving everybody else. If I’ve learned anything from doing research, it is that the greatest delusion is to think you’ve actually understood anything.

I have observed a worrying trend among some otherwise rather sensible people to brush aside criticism of conspiracy theories as smugness or over-confidence. This manifests in insinuations like these:

  • Of course, vaccines don’t cause autism, but perhaps this just distracts from the fact that they could be dangerous after all?
  • Of course, 5G doesn’t give people coronavirus but have governments used this pandemic as an opportunity to roll out 5G tech?
  • Of course, CoViD-19 wasn’t manufactured in a Chinese lab, but researchers from the Wuhan Institute of Virology published studies on such coronaviruses, so isn’t it possible that they already had the virus and it escaped the lab due to negligence or was even set loose on purpose?

Conspiracy theories are always dealing in possibilities. Of course, they require ardent believers to promote their tinfoil hat ideas. But they also feed on people like us, people with a somewhat skeptical and inquisitive mind who every so often fall prey to their own cognitive biases. Of course, all of these statements are possible – but that’s not the point. Science is not about what is possible but what is probable. Probabilities change as the evidence accumulates.

How plausible is the claim and, even if it is plausible, is it more probable than other explanations or scenarios? Even if there were evidence that companies took advantage of the pandemic to roll out 5G (you know, this thing that has been debated for years and which had been planned ages before anyone even knew what a coronavirus is), wouldn’t it make sense to do this at a time when a world population in lockdown has an unprecedented need for reliable and sophisticated mobile internet? Also, so fucking what? What concrete reason do you have for thinking 5G is a problem? Or are you just talking about the same itchy feeling people in past ages had about the internet, television, radio, and doubtless at some point also about books?

Let us for a moment ignore the blatant racism and various other factors that make this idea actually quite unlikely and accept the possibility that the coronavirus escaped from a lab in Wuhan. Why shouldn’t there be a lab studying animal-to-human transmission of viruses that have the potential for causing pandemics, especially since we already know this happened with numerous illnesses before and researchers have already warned years ago that such a coronavirus pandemic was coming? Doesn’t it make sense to study this at a place where this is likely to occur? What is more likely, that the thing that we know happens happened or that someone left a jar open by accident and let the virus escape the lab? How do you think the virus got in the lab in the first place? What makes it more likely that it escaped a lab than that it originated on a market where wild exotic animals are being consumed?

There is also an odd irony about some of these ideas. Anti-vaxxers seem somewhat quiet these days now that everybody is clamouring for a vaccine for CoViD-19. Perhaps that’s to be expected. But while there is literally no evidence that widely used vaccines make you sick (at least beyond provoking that weakened immune response which makes you unsusceptible to the actual disease anyway), there are very good reasons to ask whether a new drug or treatment is safe. This is why researchers keep reminding us that a vaccine is still at least a year away and why I find recent suggestions that one could become available even this September somewhat concerning. It is certainly great that so much work is put into fighting this pandemic, and if human use can begin soon that is obviously good news – but before we have wide global use perhaps we should ensure that this vaccine is actually safe. The plus side is that, in contrast to anti-vaxxers, vaccine scientists are actually concerned about people’s health and well-being.

The real conspiracy

Ask yourself who stands to gain if you believe a claim, whether it is a scientific finding, an official government statement, or a conspiracy. Most conspiracy theories further somebody’s agenda. It could help somebody’s reelection or bring them political influence to erode trust in certain organisations or professions, but it could also be much simpler than that: clickbait makes serious money, and some people actually sow disinformation simply for the fun of it. We can be sure of one real conspiracy: the industry behind conspiracy theories.

 

* Still waiting for my paycheck for being in the pocket of Big Second-Law-of-Thermodynamics…

** This is no reason not to improve the replicability and transparency of scientific research – quite the opposite!

Imaging

Inspired by my Twitter feed for the past few weeks, and in particular this tweet in which I suggested science should be about the science, not about the scientist, the revelation that there is such a thing as “negotiated submissions”, the debate on whether or not people should sign their peer reviews, and last but not least my inability to spell simple words, I give you my most embarrassing blog post yet. To the tune of a famous John Lennon song:

Imaging there’s no tenure
It’s easy if you try
No self-promotion needed
No reviewer makes you cry

Imaging all researchers
Searching for the truth

Imaging there’s no Twitter
It isn’t hard to do
Nothing to kill or die for
And no arguments, too

Imaging all researchers
Living life in peace

You may say I’m a procrastinator
That I should apply for grants
I should be preparing lectures
And write no stupid rants

Imaging no more authors
Results published as they came
No need for impact factors
Nobody cares about fame

Imaging all researchers
Sharing all data

You may say that I’m a dreamer
But I’m not the only one
I hope someday you’ll join us
And science will again be fun

By analogy

In June 2016, the United Kingdom carried out a little study to test the hypothesis that it is the “will of the people” that the country should leave the European Union. The result favoured the Leave hypothesis, albeit with a really small effect size (1.89%). This finding came as a surprise to many but as so often it is the most surprising results that have the most impact.

Accusations of p-hacking soon emerged. Not only was there a clear sampling bias but data thugs suggested that the results might have even been obtained by fraud. Nevertheless, the original publication was never retracted. What’s wrong with inflating the results a bit? Massaging data to fit a theory is not the worst sin! The history of science is rich with errors. Such studies can be of value if they offer new clarity in looking at phenomena.

In fact, the 2016 study did offer a lot of new ways to look at the situation. There was a fair amount of HARKing about what the result of the 2016 study actually meant. Prior to conducting the study, at conferences and in seminars the proponents of the Leave hypothesis kept talking about the UK having a relationship with the EU like Norway and Switzerland. Yet somehow in the eventual publication of the 2016 findings, the authors had changed their tune. Now they professed that their hypothesis had obviously always been that the UK should leave the EU without any deal whatsoever.

Sceptics of the Leave hypothesis pointed out various problems with this idea. For one thing, leaving the EU without a deal wasn’t a very plausible hypothesis. There were thousands of little factors to be considered and it seemed unlikely that this was really the will of the people. And of course, the nitpickers also said that “barely more than half” could never be considered the “will of the people”.

Almost immediately, there were calls for a replication to confirm that the “will of the people” really was what believers in the Leave-without-a-deal hypothesis claimed. At first, these voices came only from a ragtag band of second stringers – but as time went on and more and more people realised just how implausible the Leave hypothesis really was, their numbers grew.

Leavers however firmly disagreed. To them, a direct replication was meaningless. That was odd, for some of them had openly admitted they wanted to p-hack the hell out of this thing until they got the result they wanted. But now they claimed that there had by now been several conceptual replications of the 2016 results, first in the United States and later also in Brazil, and some might argue even in Italy, Hungary, and Poland. Similar results, albeit not statistically significant, were also found in several other European countries. Based on all this evidence, a meta-analysis surely supported the general hypothesis?

But the replicators weren’t dissuaded. The more radical among these methodological terrorists posited that any study in which the experimental design isn’t clearly defined and preregistered prior to data collection is inherently exploratory, and cannot be used to test any hypotheses. They instead called for a preregistered replication, ideally a Registered Report where the methods are peer-reviewed and the manuscript is in principle accepted for publication before data collection even commences. The fact that the 2016 study didn’t do this was just one of its many problems. But people still cite it simply because of its novelty. The replicators also pointed to other research fields, like Switzerland and Ireland, where this approach has long been used very successfully.

As an added twist, it turns out that nobody actually read the background literature. The 2016 study was already a replication attempt of previous findings from 1975. Sure, some people had vaguely heard about this earlier study. Everybody who has ever been to a conference knows that there is always one white-haired emeritus professor in the audience who will shout out “But I already did this four decades ago!”. But nobody really bothered to read this original study until now. It found an enormous result in the opposite direction, 17.23% in favour of remaining in Europe. As some commentators suggested, the population at large may have changed over the past four decades, or there may have been subtle but important differences in the methodology. What if leaving Europe then meant something different to what it means now? But if that were the case, couldn’t leaving Europe in 2016 also have meant something different than in 2019?

But the Leave proponents wouldn’t have any of that. They had already invested too much money and effort and spent all this time giving TED talks about their shiny little theory to give up now. They were in fact desperately afraid of a direct replication because they knew that, as with most replications, it would probably end in a null result and their beautiful theoretical construct would collapse like a house of cards. Deep inside, most of these people already knew they were chasing a phantom but they couldn’t ever admit it. People like Professor BoJo, Dr Moggy, and Micky “The Class Clown” Gove had built their whole careers on this Leave idea and so they defended the “will of the people” with religious zeal. The last straw they clutched at was to warn that all these failures to replicate would cause irreparable damage to the public’s faith in science.

Only Nigel Farage, unaffiliated garden gnome and self-styled “irreverent citizen scientist”, relented somewhat. Naturally, he claimed he would be doing all that just for science and the pursuit of the truth and that the result of this replication would be even clearer than the 2016 finding. But in truth, he smelled the danger on the wind. He knew that should the Leave hypothesis be finally accepted by consensus, he would be reduced to a complete irrelevance. What was more, he would not get that hefty paycheck.

As of today, the situation remains unresolved. The preregistered replication attempt is still stuck in editorial triage and hasn’t even been sent out for peer review yet. But meanwhile, people in the corridors of power in Westminster and Brussels and Tokyo and wherever else are already basing their decisions on the really weak and poor and quite possibly fraudulent data from the flawed 2016 study. But then, it’s all about the flair, isn’t it?

(Image: Brexit demonstration flags)
Shameless little bullies calling for an independent replication outside of the Palace of Westminster (Source: ChiralJon)

What is a “publication”?

I was originally thinking of writing a long blog post discussing this but it is hard to type verbose treatises like that from inside my gently swaying hammock. So y’all will be much relieved to hear that I spare you that post. Instead I’ll just post the results of the recent Twitter poll I ran, which is obviously enormously representative of the 336 people who voted. Whatever we’re going to make of this, I think it is obvious that there remains great scepticism about treating preprints the same as publications. I am also puzzled by what the hell is wrong with the 8% who voted for the third option*. Do these people not put In press articles on their CVs?

(Screenshot: results of the Twitter poll, 19 January 2019)

*) Weirdly, I could have sworn that when the poll originally closed this percentage was 9%. Somehow it was corrected downwards afterward. Should this be possible?

On optimal measures of neural similarity

Disclaimer: This is a follow-up to my previous post about the discussion between Niko Kriegeskorte and Brad Love. Here are my scientific views on the preprint by Bobadilla-Suarez, Ahlheim, Mehrotra, Panos, & Love and some of the issues raised by Kriegeskorte in his review/blog post. This is not a review and therefore not as complete as a review would be, and it contains some additional explanations and non-scientific points. Given my affiliation with Bobadilla-Suarez’s department, a formal review for a journal would constitute a conflict of interest anyway.

What’s the point of all this?

I was first attracted to Niko’s post because just the other day my PhD student and I discussed the possibility of running a new study using Representational Similarity Analysis (RSA). Given the title of his post, I jokingly asked him what was the TL;DR answer to the question “What’s the best measure of representational dissimilarity?”. At the time, I had no idea that this big controversy was brewing… I have used multivoxel pattern analyses in the past and am reasonably familiar with RSA but I have never used it in published work (although I am currently preparing a manuscript that contains one such analysis). The answer to this question is therefore pretty relevant to me.

RSA is a way to quantify the similarity of patterns of brain responses (usually measured as voxel response patterns with fMRI or the firing rates of a set of neurons etc) to a range of different stimuli. This produces a (dis)similarity matrix where each pairwise comparison is a cell that denotes how similar/confusable the response patterns to those stimuli are. In turn, the pattern of these similarities (the “representational similarity”) then allows researchers to draw inferences about how particular stimuli (or stimulus dimensions) are encoded in the brain. Here is an illustration:

(Illustration: representational dissimilarity matrices for the two hypothetical observers described below)

The person called Warshort believes journal reviews, preprint comments, and blog posts to be more or less the same thing, public commentaries on published research. The logic of RSA is that somewhere in their brain the pattern of neural activity evoked by these three concepts is similar. Contrast this to person Liebe who regards reviews and preprint comments to be similar (but not as similar as Warshort would) but who considers personal blog posts to be diametrically opposed to reviews.
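
For readers who have not used RSA, here is a minimal sketch of how such a matrix is computed from a condition-by-response-channel matrix, using toy data and a few of the candidate dissimilarity measures at issue here (this is generic illustration code, not the authors’ pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

# Toy data: responses of 100 voxels (or neurons) to 6 stimulus conditions
n_conditions, n_voxels = 6, 100
responses = rng.normal(size=(n_conditions, n_voxels))

# Representational dissimilarity matrices (RDMs) under different measures
rdm_corr = squareform(pdist(responses, metric="correlation"))   # 1 - Pearson r
rdm_eucl = squareform(pdist(responses, metric="euclidean"))

# Given an estimate of the noise covariance, a Mahalanobis RDM is also possible
noise_cov = np.cov(rng.normal(size=(n_voxels, 200)))            # stand-in noise estimate
rdm_maha = squareform(pdist(responses, metric="mahalanobis",
                            VI=np.linalg.pinv(noise_cov)))

print(np.round(rdm_corr, 2))   # 6 x 6 matrix; small values = similar response patterns
```

The question the preprint asks is, in essence, which of these (and several other) choices of measure best corresponds to whatever the brain itself does with such response patterns.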

What is the research question?

According to their introduction, Bobadilla-Suarez et al. set themselves the following goals:

“The first goal was to ascertain whether the similarity measures used by the brain differ across regions. The second goal was to investigate whether the preferred measures differ across tasks and stimulus conditions. Our broader aim was to elucidate the nature of neural similarity.”

In some sense, it is one of the overarching goals of cognitive neuroscience to answer that final question, so they certainly have their work cut out for them. But looking at this more specifically, the question of the best measure of comparing brain states across conditions and how this depends on where and what is being compared is an important one for the field.

Unfortunately, to me this question seems ill-posed in the context of this study. If the goal is to understand what similarity measures are “used by the brain”, we immediately need to ask ourselves whether the techniques used to address this question are appropriate to answer it. This is largely a conceptual point, and the study’s first caveat for me. We could instead reinterpret this as a technical comparison of different methods, but therein lies another caveat, and this seems to be the main concern Kriegeskorte raised in his review. I’ll elaborate on both these points in turn:

The conceptual issue

I am sure the authors are fully aware of the limitations of making inferences about neural representations from brain imaging data. Any such inferences can only be as good as the method for measuring brain responses. Most studies using RSA are based on fMRI data which measures a metabolic proxy of neuronal activity. While fMRI experiments have doubtless made important discoveries about how the brain is organised and functions, this is a caveat we need to take seriously: there may very well be information in brain activity that is not directly reflected in fMRI measures. It is almost certainly not the case that brain regions communicate with one another directly via reading out their respective metabolic activity patterns.

This issue is further complicated by the fact that RSA studies using fMRI are based on voxel activity patterns. Voxels are individual elements in a brain image, the equivalent of pixels in a digital image. How a brain scan is subdivided into voxels is completely arbitrary and depends on a lot of methodological choices and parameters. The logic of using voxel patterns for RSA is that individual voxels will usually exhibit biased responses depending on the stimulus – however, the nature of these response biases remains highly controversial and also quite likely depends on what brain states (visual stimuli, complex tasks, memories, etc) are being compared. Critically, voxel patterns cannot possibly be directly relevant to neural encoding. At best, they are indirectly correlated with the underlying patterns, and naturally, the voxel resolution may also matter. In theory, two stimuli could be encoded by completely non-overlapping and unconnected neuronal populations which are nevertheless mixed into the same voxels. In that case, even if voxel responses were a direct measure of neuronal activity, they might not show any biased responses at all, and the voxel response pattern would therefore carry no information about the stimuli whatsoever.

But there is an even more fundamental issue here. This is also unaffected by what actual brain measure is used, be it voxel patterns or the firing rates of actual neurons. The authors’ stated goal is to reveal what measure the brain itself uses to establish the similarity of brain states. The measures they compare are statistical methods, e.g. the Pearson correlation coefficient or the Mahalanobis distance between two response patterns. But the brain is no statistician. At most, a statistical quantity like a Pearson’s r might be a good description for what some read-out neurons somewhere in the processing hierarchy do to categorise the response patterns in up-stream regions. This may sound like an unnecessarily pedantic semantic distinction, but I’d disagree: by only testing predefined statistical models of how pattern similarity could be quantified, we may impose an artificially biased set of models. The actual way this is implemented in neuronal circuits may very well be a hybrid or a completely different process altogether. Neural similarity might linearly correlate with Pearson’s r over some range, say between r = 0.5 and 1, but then be more consistent with a magnitude code at the lower end of similarities. It might also come with built-in thresholding or rectifying mechanisms in which patterns below a certain criterion are automatically encoded as dissimilar. Of course, you have to start somewhere and the models the authors used are reasonable choices. However, the description should be more circumspect in my view, because at best we can say that the results suggest a mechanism that is well described by a given statistical model.

Finally, the authors seem to make an implicit assumption that does not necessarily hold: there is actually no reason to accept up-front that the brain quantifies pattern similarity at all. I assume that it does, and it is certainly an important assumption to be tested empirically. But in theory it seems entirely possible that spatial patterns of neural activity in a particular brain region are an epiphenomenon of how neurons in that region are organised. This does not, however, mean that downstream neurons necessarily use this pattern information. I’d wager this almost certainly also depends on the stimulus/task. For instance, a higher-level neuron whose job it is to determine whether a stimulus appeared on the left or the right presumably uses the spatial pattern of retinotopically-organised responses in the earlier visual regions. For other, more complex stimulus dimensions, this may not be the case.

The technical issue

This brings me to the other caveat I see with Bobadilla-Suarez et al.’s approach here. As I said, this is largely the same point made by Kriegeskorte in his review and since this takes up most of his post I’ll keep it brief. If we brush aside the conceptual points I made above and instead assume that the brain indeed determines the similarity of response patterns in up-stream areas, what is the best way to test how it does this? The authors used a machine learning classifier to perform pairwise decoding of different stimuli and construct a confusability matrix. Conceptually, this is pretty much the same as the similarity matrix derived from the other measures they are testing (e.g. Pearson’s r), but it instead uses a classifier algorithm to determine the discriminability of the response patterns. The authors then compare these decoding matrices with those based on the similarity measures they tested.

As Kriegeskorte suggests, these decoding methods are just another method of determining neural similarity. Different kinds of decoders are also closely related to the various methods Bobadilla-Suarez et al. compared: the Mahalanobis distance isn’t conceptually very far from a linear discriminant decoder, and you can actually build a classifier using Pearson’s r (in fact, this is the classifier I mostly used in my own studies).
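
To make that concrete, here is a toy version of such a correlation-based classifier (invented data and parameters; just a sketch of the general idea): each left-out test pattern is assigned to whichever condition’s training template it correlates with most, so pairwise decoding accuracy is itself nothing more than a read-out of pattern similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 100, 40   # trials per condition

# Toy voxel patterns for two conditions, each a weak fixed signal plus trial noise
signal = {c: rng.normal(0, 0.5, n_voxels) for c in ("A", "B")}
trials = {c: signal[c] + rng.normal(0, 1, (n_trials, n_voxels)) for c in ("A", "B")}

# Split-half cross-validation: templates from the first half, testing on the second
half = n_trials // 2
templates = {c: trials[c][:half].mean(axis=0) for c in ("A", "B")}

correct, total = 0, 0
for c in ("A", "B"):
    for pattern in trials[c][half:]:
        # Pearson correlation between the test pattern and each condition template
        r = {k: np.corrcoef(pattern, templates[k])[0, 1] for k in ("A", "B")}
        correct += (max(r, key=r.get) == c)
        total += 1

print("pairwise decoding accuracy:", correct / total)
```

Swap the correlation for a Mahalanobis-style distance and you have something close to a linear discriminant; the boundary between “similarity measure” and “decoder” is blurry.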

The premise of Bobadilla-Suarez et al.’s study therefore seems circular. They treat decodability of neural activity patterns as the ground truth of neural similarity, and that assumption seems untenable to me. They discuss the confound that the choice of decoding algorithm would affect the results and therefore advocate using the best available algorithm, yet this doesn’t really address the underlying issue. The decoder establishes the statistical similarity between neural response patterns. It does not quantify the actual neural similarity code – as a matter of fact, it cannot possibly do so.

It is therefore also unsurprising if the similarity measure that best matches classifier performance is the one that is closest to what the given classifier algorithm is based on. I may have missed this, but I cannot discern from the manuscript which classifier was actually used for the final analyses, only that the best of three was chosen. The best classifier was determined separately for the two datasets the authors used, which could be one explanation for why the results differ between the two datasets.

Summary

Bobadilla-Suarez et al. ask an interesting and important question but I don’t think the study as it is can actually address it. There is a conceptual issue in that the brain may not necessarily use any of the available statistical models to quantify neural similarity, and in fact it may not do so at all. Of course, it is perfectly valid to compare different models of how it achieves this feat and any answer to this question need not be final. It does however seem to me that this is more of a methodological comparison rather than an attempt to establish what the brain is actually doing.

To my understanding, the approach the authors used to establish which similarity measure is best cannot answer this question. In this I appear to concur with Kriegeskorte’s review. Perhaps I am wrong of course, as the authors have previously suggested that Kriegeskorte “missed the point”, in which case I would welcome further explanation of the authors’ rationale here. However, from where I’m currently standing, I would recommend that the authors revise their manuscript as a methodological comparison and to be more circumspect with regard to claims about neural representations.

The results shown here are certainly not without merit. By comparing commonly used similarity measures to the best available decoding algorithm they may not establish which measure is closest to what the brain is doing, but they certainly do show how these measures compare to complex classification algorithms. This in itself is informative for practical reasons because decoding is computationally expensive. Any squabbling aside, the authors show that the most commonly used measure, Pearson’s correlation, clearly does not perform in the same way as a lot of other possible techniques. This finding should also be of interest to anyone conducting an RSA experiment.

Some final words

I hope the authors find this comment useful. Just because I agree with Kriegeskorte’s main point, I hope that doesn’t make me his “acolyte” (I have neither been trained by him nor would I say that we stem from the same theoretical camp). I may have “missed the point” too, in which case I would appreciate further insight.

I find it very unfortunate that instead of a decent discussion on science, this debate descended into something not far above a poo-slinging contest. I have deliberately avoided taking sides in that argument because of my relationship to either side. While I vehemently object to the manner with which Brad responded to Niko’s post, I think it should be obvious that not everybody is on the same wavelength when it comes to open reviewing. It is depressing and deeply unsettling how many people on either side of this divide appear to be unwilling to even try to understand the other point of view.

Turning off comments

I have decided to turn off the comment functionality on this blog. I used to believe strongly that this would be the best place for any discussion to take place, but this is clearly utopian. Most discussion about blog posts inevitably occurs on social media like Twitter and Facebook. At my advanced age I find it increasingly hard to keep track of all these multiple parallel streams, and I predict I’ll soon find it even harder. Most of the comments here rehashed discussions I had already had elsewhere, while some of them were completely pointless. There was also the occasional joker who just took a dump on my lawn but didn’t bother to stick around for a chat. So now I am consolidating my resources. If you have a comment, ping me a reply on Twitter (I always tweet out the link to a new post), respond via another blog post, or if you prefer a private conversation you can always email me.