Imaging

Inspired by my Twitter feed for the past few weeks, and in particular this tweet in which I suggested science should be about the science not about the scientist, the revelation that there is such a thing as “negotiated submissions”, the debate on whether or not people should sign their peer reviews, and last but not least my inability to spell simple words, I give you my most embarrassing blog post yet. To the tune of a famous John Lennon song:

Imaging there’s no tenure
It’s easy if you try
No self-promotion needed
No reviewer makes you cry

Imaging all researchers
Searching for the truth

Imaging there’s no Twitter
It isn’t hard to do
Nothing to kill or die for
And no arguments, too

Imaging all researchers
Living life in peace

You may say I’m a procrastinator
That I should apply for grants
I should be preparing lectures
And write no stupid rants

Imaging no more authors
Results published as they came
No need for impact factors
Nobody cares about fame

Imaging all researchers
Sharing all data

You may say that I’m a dreamer
But I’m not the only one
I hope someday you’ll join us
And science will again be fun

By analogy

In June 2016, the United Kingdom carried out a little study to test the hypothesis that it is the “will of the people” that the country should leave the European Union. The result favoured the Leave hypothesis, albeit with a really small effect size (1.89%). This finding came as a surprise to many, but as is so often the case, it is the most surprising results that have the most impact.

Accusations of p-hacking soon emerged. Not only was there a clear sampling bias but data thugs suggested that the results might have even been obtained by fraud. Nevertheless, the original publication was never retracted. What’s wrong with inflating the results a bit? Massaging data to fit a theory is not the worst sin! The history of science is rich with errors. Such studies can be of value if they offer new clarity in looking at phenomena.

In fact, the 2016 study did offer a lot of new ways to look at the situation. There was a fair amount of HARKing about what the result of the 2016 study actually meant. Prior to conducting the study, at conferences and in seminars the proponents of the Leave hypothesis kept talking about the UK having a relationship with the EU like Norway and Switzerland. Yet somehow in the eventual publication of the 2016 findings, the authors had changed their tune. Now they professed that their hypothesis had obviously always been that the UK should leave the EU without any deal whatsoever.

Sceptics of the Leave hypothesis pointed out various problems with this idea. For one thing, leaving the EU without a deal wasn’t a very plausible hypothesis. There were thousands of little factors to be considered and it seemed unlikely that this was really the will of the people. And of course, the nitpickers also said that “barely more than half” could never be considered the “will of the people”.

Almost immediately, there were calls for a replication to confirm that the “will of the people” really was what believers in the Leave-without-a-deal hypothesis claimed. At first, these voices came only from a ragtag band of second stringers – but as time went on and more and more people realised just how implausible the Leave hypothesis really was, their numbers grew.

Leavers, however, firmly disagreed. To them, a direct replication was meaningless. That was odd, for some of them had openly admitted they wanted to p-hack the hell out of this thing until they got the result they wanted. But they now claimed that there had since been several conceptual replications of the 2016 results, first in the United States and later also in Brazil, and some might argue even in Italy, Hungary, and Poland. Similar results were also found in several other European countries, albeit not statistically significant ones. Based on all this evidence, a meta-analysis surely supported the general hypothesis?

But the replicators weren’t dissuaded. The more radical among these methodological terrorists posited that any study in which the experimental design isn’t clearly defined and preregistered prior to data collection is inherently exploratory, and cannot be used to test any hypotheses. They instead called for a preregistered replication, ideally a Registered Report where the methods are peer-reviewed and the manuscript is in principle accepted for publication before data collection even commences. The fact that the 2016 study didn’t do this was just one of its many problems. But people still cite it simply because of its novelty. The replicators also pointed to other research fields, like Switzerland and Ireland, where this approach has long been used very successfully.

As an added twist, it turns out that nobody actually read the background literature. The 2016 study was already a replication attempt of previous findings from 1975. Sure, some people had vaguely heard about this earlier study. Everybody who has ever been to a conference knows that there is always one white-haired emeritus professor in the audience who will shout out “But I already did this four decades ago!”. But nobody really bothered to read this original study until now. It found an enormous result in the opposite direction, 17.23% in favour of remaining in Europe. As some commentators suggested, the population at large may have changed over the past four decades, or there may have been subtle but important differences in the methodology. What if leaving Europe then meant something different to what it means now? But if that were the case, couldn’t leaving Europe in 2016 also have meant something different than in 2019?

But the Leave proponents wouldn’t have any of that. They had already invested too much money and effort and spent all this time giving TED talks about their shiny little theory to give up now. They were in fact desperately afraid of a direct replication because they knew that, as with most replications, it would probably end in a null result and their beautiful theoretical construct would collapse like a house of cards. Deep inside, most of these people already knew they were chasing a phantom but they couldn’t ever admit it. People like Professor BoJo, Dr Moggy, and Micky “The Class Clown” Gove had built their whole careers on this Leave idea and so they defended the “will of the people” with religious zeal. The last straw they clutched at was to warn that all these failures to replicate would cause irreparable damage to the public’s faith in science.

Only Nigel Farage, unaffiliated garden gnome and self-styled “irreverent citizen scientist”, relented somewhat. Naturally, he claimed he would be doing all that just for science and the pursuit of the truth and that the result of this replication would be even clearer than the 2016 finding. But in truth, he smelled the danger on the wind. He knew that should the Leave hypothesis be finally accepted by consensus, he would be reduced to a complete irrelevance. What was more, he would not get that hefty paycheck.

As of today, the situation remains unresolved. The preregistered replication attempt is still stuck in editorial triage and hasn’t even been sent out for peer review yet. But meanwhile, people in the corridors of power in Westminster and Brussels and Tokyo and wherever else are already basing their decisions on the really weak and poor and quite possibly fraudulent data from the flawed 2016 study. But then, it’s all about the flair, isn’t it?

[Image: Brexit demonstration flags]
Shameless little bullies calling for an independent replication outside of the Palace of Westminster (Source: ChiralJon)

What is a “publication”?

I was originally thinking of writing a long blog post discussing this but it is hard to type verbose treatises like that from inside my gently swaying hammock. So y’all will be much relieved to hear that I spare you that post. Instead I’ll just post the results of the recent Twitter poll I ran, which is obviously enormously representative of the 336 people who voted. Whatever we’re going to make of this, I think it is obvious that there remains great scepticism about treating preprints the same as publications. I am also puzzled by what the hell is wrong with the 8% who voted for the third option*. Do these people not put In press articles on their CVs?

[Screenshot: results of the Twitter poll, 19 January 2019]

*) Weirdly, I could have sworn that when the poll originally closed this percentage was 9%. Somehow it was corrected downwards afterward. Should this be possible?

On optimal measures of neural similarity

Disclaimer: This is a follow-up to my previous post about the discussion between Niko Kriegeskorte and Brad Love. Here are my scientific views on the preprint by Bobadilla-Suarez, Ahlheim, Mehrotra, Panos, & Love and some of the issues raised by Kriegeskorte in his review/blog post. This is not a review and therefore not as complete as a review would be, and it contains some additional explanations and non-scientific points. Given my affiliation with Bobadilla-Suarez’s department, a formal review for a journal would constitute a conflict of interest anyway.

What’s the point of all this?

I was first attracted to Niko’s post because just the other day my PhD student and I discussed the possibility of running a new study using Representational Similarity Analysis (RSA). Given the title of his post, I jokingly asked him what was the TL;DR answer to the question “What’s the best measure of representational dissimilarity?”. At the time, I had no idea that this big controversy was brewing… I have used multivoxel pattern analyses in the past and am reasonably familiar with RSA but I have never used it in published work (although I am currently preparing a manuscript that contains one such analysis). The answer to this question is therefore pretty relevant to me.

RSA is a way to quantify the similarity of patterns of brain responses (usually measured as voxel response patterns with fMRI or the firing rates of a set of neurons etc) to a range of different stimuli. This produces a (dis)similarity matrix where each pairwise comparison is a cell that denotes how similar/confusable the response patterns to those stimuli are. In turn, the pattern of these similarities (the “representational similarity”) then allows researchers to draw inferences about how particular stimuli (or stimulus dimensions) are encoded in the brain. Here is an illustration:

[Illustration: representational similarity analysis]

The person called Warshort believes journal reviews, preprint comments, and blog posts to be more or less the same thing: public commentaries on published research. The logic of RSA is that somewhere in their brain the pattern of neural activity evoked by these three concepts is similar. Contrast this with the person called Liebe, who regards reviews and preprint comments as similar (though not as similar as Warshort would) but who considers personal blog posts to be diametrically opposed to reviews.
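To make this concrete, here is a minimal Python sketch of the RSA logic just described. Everything in it is hypothetical: the "voxel patterns" are random numbers and the model matrices are made-up values standing in for Warshort's and Liebe's views.

```python
# A minimal sketch of the RSA logic described above (illustrative numbers only).
# We compute a neural dissimilarity matrix from hypothetical voxel patterns and
# compare it to two "model" matrices representing Warshort's and Liebe's views.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical voxel response patterns (rows: journal review, preprint comment,
# blog post; columns: voxels). In a real study these would come from the data.
patterns = rng.normal(size=(3, 50))

# Neural representational dissimilarity matrix (RDM): 1 - Pearson correlation
# between each pair of patterns, as a condensed vector of the 3 pairwise values.
neural_rdm = pdist(patterns, metric="correlation")

# Model RDMs (pairs ordered: review-comment, review-blog, comment-blog).
# Warshort: all three concepts are roughly the same thing.
warshort_rdm = np.array([0.1, 0.2, 0.2])
# Liebe: reviews and comments are similar, blogs are the opposite of reviews.
liebe_rdm = np.array([0.3, 1.0, 0.8])

# RSA then asks which model RDM best matches the neural RDM, typically via
# rank correlation across the pairwise cells.
for name, model in [("Warshort", warshort_rdm), ("Liebe", liebe_rdm)]:
    rho, _ = spearmanr(neural_rdm, model)
    print(f"{name}: Spearman rho = {rho:.2f}")
```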

What is the research question?

According to their introduction, Bobadilla-Suarez et al. set themselves the following goals:

“The first goal was to ascertain whether the similarity measures used by the brain differ across regions. The second goal was to investigate whether the preferred measures differ across tasks and stimulus conditions. Our broader aim was to elucidate the nature of neural similarity.”

In some sense, it is one of the overarching goals of cognitive neuroscience to answer that final question, so they certainly have their work cut out for them. But looking at this more specifically, the question of the best measure of comparing brain states across conditions and how this depends on where and what is being compared is an important one for the field.

Unfortunately, to me this question seems ill-posed in the context of this study. If the goal is to understand what similarity measures are “used by the brain” we immediately need to ask ourselves whether the techniques used to address this question are appropriate to answer it. This is largely a conceptual point, and the study’s first caveat for me. We could instead reinterpret this as a technical comparison of different methods, but therein lies another caveat and this seems to be the main concern Kriegeskorte raised in his review. I’ll elaborate on both these points in turn:

The conceptual issue

I am sure the authors are fully aware of the limitations of making inferences about neural representations from brain imaging data. Any such inferences can only be as good as the method for measuring brain responses. Most studies using RSA are based on fMRI data which measures a metabolic proxy of neuronal activity. While fMRI experiments have doubtless made important discoveries about how the brain is organised and functions, this is a caveat we need to take seriously: there may very well be information in brain activity that is not directly reflected in fMRI measures. It is almost certainly not the case that brain regions communicate with one another directly via reading out their respective metabolic activity patterns.

This issue is further complicated by the fact that RSA studies using fMRI are based on voxel activity patterns. Voxels are individual elements in a brain image, the equivalent of pixels in a digital image. How a brain scan is subdivided into voxels is completely arbitrary and depends on a lot of methodological choices and parameters. The logic of using voxel patterns for RSA is that individual voxels will usually exhibit biased responses depending on the stimulus – however, the nature of these response biases remains highly controversial and also quite likely depends on what brain states (visual stimuli, complex tasks, memories, etc) are being compared. Critically, voxel patterns cannot possibly be directly relevant to neural encoding. At best, they are indirectly correlated with the underlying patterns, and naturally, the voxel resolution may also matter. In theory, two stimuli could be encoded by completely non-overlapping and unconnected neuronal populations which are nevertheless mixed into the same voxels. In that case, even if voxel responses were a direct measure of neuronal activity, the voxels might not show any biased responses at all, and the voxel response pattern would therefore carry no information about the stimuli whatsoever.
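As a toy illustration of this last point (with entirely made-up parameters, and not meant as a realistic model of fMRI), one can simulate two stimuli that are perfectly separable at the neuronal level but indistinguishable once the neurons are averaged into voxels:

```python
# Toy simulation: two stimuli are encoded by completely non-overlapping
# neuronal populations, but once neurons are averaged into voxels the voxel
# patterns no longer discriminate the stimuli. All parameters are assumed.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, neurons_per_voxel, n_trials = 20, 100, 50

# Each voxel contains two interleaved populations: one driven by stimulus A,
# one by stimulus B. The populations do not overlap at the neuronal level.
resp_A = np.zeros((n_trials, n_voxels, neurons_per_voxel))
resp_B = np.zeros((n_trials, n_voxels, neurons_per_voxel))
half = neurons_per_voxel // 2
resp_A[:, :, :half] = 1.0   # population 1 fires for stimulus A
resp_B[:, :, half:] = 1.0   # population 2 fires for stimulus B
noise = 0.5
resp_A += rng.normal(scale=noise, size=resp_A.shape)
resp_B += rng.normal(scale=noise, size=resp_B.shape)

# Neuron-level patterns are trivially distinguishable...
neuron_diff = np.abs(resp_A.mean(axis=0) - resp_B.mean(axis=0)).mean()
# ...but each voxel averages over both populations, so the voxel patterns
# for A and B end up essentially identical.
vox_A = resp_A.mean(axis=2)
vox_B = resp_B.mean(axis=2)
voxel_diff = np.abs(vox_A.mean(axis=0) - vox_B.mean(axis=0)).mean()

print(f"Mean neuron-level difference: {neuron_diff:.2f}")   # close to 1
print(f"Mean voxel-level difference:  {voxel_diff:.2f}")    # close to 0
```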

But there is an even more fundamental issue here. This is also unaffected by what actual brain measure is used, be it voxel patterns or the firing rates of actual neurons. The authors’ stated goal is to reveal what measure the brain itself uses to establish the similarity of brain states. The measures they compare are statistical methods, e.g. the Pearson correlation coefficient or the Mahalanobis distance between two response patterns. But the brain is no statistician. At most, a statistical quantity like a Pearson’s r might be a good description for what some read-out neurons somewhere in the processing hierarchy do to categorise the response patterns in up-stream regions. This may sound like an unnecessarily pedantic semantic distinction, but I’d disagree: by only testing predefined statistical models of how pattern similarity could be quantified, we may impose an artificially biased set of models. The actual way this is implemented in neuronal circuits may very well be a hybrid or a completely different process altogether. Neural similarity might linearly correlate with Pearson’s r over some range, say between r = 0.5 and 1, but be more consistent with a magnitude code at the lower end of similarities. It might also come with built-in thresholding or rectifying mechanisms in which patterns below a certain criterion are automatically encoded as dissimilar. Of course, you have to start somewhere and the models the authors used are reasonable choices. However, in my view the description should be more circumspect: at best, the results suggest a mechanism that is well described by a given statistical model.
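To illustrate what I mean, here is a deliberately contrived sketch of such a hypothetical read-out: it tracks Pearson's r when patterns are highly similar but treats everything below a criterion as maximally dissimilar. Nothing here is claimed about how the brain actually works; the point is only that the neuronal implementation need not map onto any single off-the-shelf statistic.

```python
# A purely hypothetical "read-out" that follows Pearson's r for highly similar
# patterns but encodes everything below a criterion as maximally dissimilar.
import numpy as np
from scipy.stats import pearsonr

def hypothetical_readout(pattern_a, pattern_b, criterion=0.5):
    """Rectified similarity: Pearson's r above the criterion, zero below it."""
    r, _ = pearsonr(pattern_a, pattern_b)
    return r if r >= criterion else 0.0

rng = np.random.default_rng(2)
base = rng.normal(size=100)
for noise in (0.2, 1.0, 3.0):
    other = base + rng.normal(scale=noise, size=100)
    r, _ = pearsonr(base, other)
    print(f"Pearson r = {r:5.2f}, hypothetical read-out = "
          f"{hypothetical_readout(base, other):.2f}")
```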

Finally, the authors seem to make an implicit assumption that does not necessarily hold: there is actually no reason to accept up-front that the brain quantifies pattern similarity at all. I assume that it does, and it is certainly an important assumption to be tested empirically. But in theory it seems entirely possible that spatial patterns of neural activity in a particular brain region are an epiphenomenon of how neurons in that region are organised. However, this does not mean that downstream neurons necessarily use this pattern information. I’d wager this almost certainly also depends on the stimulus/task. For instance, a higher-level neuron whose job it is to determine whether a stimulus appeared on the left or the right presumably uses the spatial pattern of retinotopically-organised responses in earlier visual regions. For other, more complex stimulus dimensions, this may not be the case.

The technical issue

This brings me to the other caveat I see with Bobadilla-Suarez et al.’s approach here. As I said, this is largely the same point made by Kriegeskorte in his review and since this takes up most of his post I’ll keep it brief. If we brush aside the conceptual points I made above and instead assume that the brain indeed determines the similarity of response patterns in up-stream areas, what is the best way to test how it does this? The authors used a machine learning classifier to perform pair-wise decoding of different stimuli and construct a confusability matrix. Conceptually, this is pretty much the same as the similarity matrix derived from the other measures they are testing (e.g. Pearson’s r) but it instead uses a classifier algorithm to determine the discriminability of the response patterns. The authors then compare these decoding matrices with those based on the similarity measures they tested.
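For readers unfamiliar with this approach, the following sketch shows roughly what pairwise cross-validated decoding looks like on simulated data. This is a generic illustration using a linear discriminant classifier, not the authors' actual pipeline or parameters.

```python
# A rough sketch of pairwise cross-validated decoding used to build a
# "decoding matrix" analogous to an RDM: each cell is the accuracy with which
# a classifier discriminates two stimuli. Simulated data, assumed parameters.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_stimuli, n_trials, n_voxels = 4, 40, 60

# Simulated voxel patterns: each stimulus has its own mean pattern plus noise.
means = rng.normal(size=(n_stimuli, n_voxels))
data = [means[s] + rng.normal(scale=2.0, size=(n_trials, n_voxels))
        for s in range(n_stimuli)]

decoding_matrix = np.zeros((n_stimuli, n_stimuli))
for i in range(n_stimuli):
    for j in range(i + 1, n_stimuli):
        # Cross-validated accuracy for discriminating stimulus i from j.
        X = np.vstack([data[i], data[j]])
        y = np.array([0] * n_trials + [1] * n_trials)
        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
        decoding_matrix[i, j] = decoding_matrix[j, i] = acc

print(np.round(decoding_matrix, 2))
```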

As Kriegeskorte suggests, these decoding methods are just another method of determining neural similarity. Different kinds of decoders are also closely related to the various methods Bobadilla-Suarez et al. compared: the Mahalanobis distance isn’t conceptually very far from a linear discriminant decoder, and you can actually build a classifier using Pearson’s r (in fact, this is the classifier I mostly used in my own studies).
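For what it's worth, such a correlation-based classifier can be very simple. The sketch below shows one minimal version (a nearest-template rule based on Pearson's r, applied to simulated data); actual implementations differ in details such as cross-validation and normalisation.

```python
# A minimal correlation-based classifier of the kind mentioned above: a test
# pattern is assigned to whichever condition's training template it correlates
# with most strongly. Illustrative only; details vary between studies.
import numpy as np

def correlation_classifier(train_templates, test_patterns):
    """train_templates: (n_conditions, n_voxels) mean training patterns.
    test_patterns: (n_test, n_voxels). Returns predicted condition indices."""
    predictions = []
    for pattern in test_patterns:
        rs = [np.corrcoef(pattern, template)[0, 1] for template in train_templates]
        predictions.append(int(np.argmax(rs)))
    return np.array(predictions)

# Illustrative use with simulated data:
rng = np.random.default_rng(4)
templates = rng.normal(size=(3, 50))           # 3 conditions, 50 voxels
labels = rng.integers(0, 3, size=20)           # true condition of each test trial
tests = templates[labels] + rng.normal(scale=1.0, size=(20, 50))
accuracy = (correlation_classifier(templates, tests) == labels).mean()
print(f"Decoding accuracy: {accuracy:.2f}")
```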

The premise of Bobadilla-Suarez et al.’s study therefore seems circular. They treat decodability of neural activity patterns as the ground truth of neural similarity, and that assumption seems untenable to me. They discuss the confound that the choice of decoding algorithm would affect the results and therefore advocate using the best available algorithm, yet this doesn’t really address the underlying issue. The decoder establishes the statistical similarity between neural response patterns. It does not quantify the actual neural similarity code – as a matter of fact, it cannot possibly do so.

It is therefore also unsurprising if the similarity measure that best matches classifier performance is the one that is closest to what the given classifier algorithm is based on. I may have missed this, but I cannot discern from the manuscript which classifier was actually used for the final analyses, only that the best of three was chosen. The best classifier was determined separately for the two datasets the authors used, which could be one explanation for why the results differ between them.

Summary

Bobadilla-Suarez et al. ask an interesting and important question but I don’t think the study as it is can actually address it. There is a conceptual issue in that the brain may not necessarily use any of the available statistical models to quantify neural similarity, and in fact it may not do so at all. Of course, it is perfectly valid to compare different models of how it achieves this feat and any answer to this question need not be final. It does however seem to me that this is more of a methodological comparison rather than an attempt to establish what the brain is actually doing.

To my understanding, the approach the authors used to establish which similarity measure is best cannot answer this question. In this I appear to concur with Kriegeskorte’s review. Perhaps I am wrong of course, as the authors have previously suggested that Kriegeskorte “missed the point”, in which case I would welcome further explanation of the authors’ rationale here. However, from where I’m currently standing, I would recommend that the authors revise their manuscript as a methodological comparison and to be more circumspect with regard to claims about neural representations.

The results shown here are certainly not without merit. By comparing commonly used similarity measures to the best available decoding algorithm they may not establish which measure is closest to what the brain is doing, but they certainly do show how these measures compare to complex classification algorithms. This in itself is informative for practical reasons because decoding is computationally expensive. Any squabbling aside, the authors show that the most commonly used measure, Pearson’s correlation, clearly does not perform in the same way as a lot of other possible techniques. This finding should also be of interest to anyone conducting an RSA experiment.

Some final words

I hope the authors find this comment useful. Just because I agree with Kriegeskorte’s main point, I hope that doesn’t make me his “acolyte” (I have neither been trained by him nor would I say that we stem from the same theoretical camp). I may have “missed the point” too, in which case I would appreciate further insight.

I find it very unfortunate that instead of a decent discussion on science, this debate descended into something not far above a poo-slinging contest. I have deliberately avoided taking sides in that argument because of my relationship to either side. While I vehemently object to the manner with which Brad responded to Niko’s post, I think it should be obvious that not everybody is on the same wavelength when it comes to open reviewing. It is depressing and deeply unsettling how many people on either side of this divide appear to be unwilling to even try to understand the other point of view.

Turning off comments

I have decided to turn off the comment functionality on this blog. I used to believe strongly that this would be the best place for any discussion to take place but this is clearly utopian. Most discussion about blog posts inevitably occurs on social media like Twitter and Facebook. At my advanced age I find it increasingly harder to keep track of all these multiple parallel streams and I predict soon I’ll find it even harder. Most of the comments here rehashed discussions I had already had elsewhere, while some of them were completely pointless. There was also the occasional joker who just took a dump on my lawn but didn’t bother to stick around for a chat. So now I am consolidating my resources. If you have a comment, ping me a reply on Twitter (I always tweet out the link to a new post), respond via another blog post, or if you prefer a private conversation you can always email me.

An open review of open reviewing

a.k.a. Love v Kriegeskorten

An interesting little spat played out on science social media today. It began with a blog post by Niko Kriegeskorte, in which he posted a peer review he had conducted on a manuscript by Brad Love. The manuscript in question is publicly available as a preprint. I don’t want to go into too much detail here (you can read that all up for yourself) but Brad took issue with the fact and the manner in which Niko posted the review of their manuscript after it was rejected by a journal. A lot of related discussion also took place via Twitter (see links in Brad’s post) and on Facebook.

I must say, in these days of hyper-polarisation in everyday political and social discourse, I find this debate really refreshing. It is pretty easy to feel outrage and disagreement with a president putting children in cages or holding a whole bloody country hostage over a temper tantrum – although the fact that there are apparently still far too many people who do not feel outraged about these things is certainly a pretty damning indictment of the moral bankruptcy of the human race… Anyway, things are far more philosophically challenging when there is a genuine and somewhat acrimonious disagreement between two sides you respect equally. For the record, Brad is a former colleague of mine from my London days, whose work I have the utmost respect for. Niko has for years been a key player in multivariate and representational analysis and we have collaborated in the past.

Whatever my personal relationship to these people, I can certainly see both points of view in this argument. Brad seems to object mostly to the fact that Niko posted the reviews on his own blog and without the authors’ consent. He regards this as a “self-serving” act. In contrast, Niko regards this as a substantial part of open review. His justification for posting this review publicly is that the manuscript is already public anyway, and that this invites public commentary. I don’t think that Brad particularly objects to public commentary, but he sees a conflict of interest in using a personal blog as a venue for this, especially since these were the peer reviews Niko wrote for a journal, not comments on the preprint server. Moreover, since these were reviews that led to the paper being rejected by the journal, he and his coauthors had no opportunity to reply to Niko’s reviews.

This is a tough nut to crack. But this is precisely the kind of discussion we need to have for making scientific publishing and peer review more transparent. For several years now I have argued that peer reviews should be public (even if the reviewers’ names are redacted). I believe reviewers’ comments and editorial decisions should be transparent. I’ve heard “How did this get accepted for publication?” in journal clubs just too many times. Show the world why! Not only is it generally more open but it will also make it fairer when there are challenges to the validity of an editorial decision, including dodgy decisions to retract studies.

That said, Brad certainly has a point that ethically this openness requires up-front consent from both parties. The way he sees it, he and his coauthors did not consent to publishing these journal reviews (which, in the present system, are still behind closed doors). Niko’s view is clearly that because the preprint is public, consent is implicit and this is fair game. Brad’s counterargument to this is that any comments on the preprint should be made directly on the preprint. This is separate from any journal review process and would allow the authors to consider the comments and decide if and how to respond to them. So, who is right here?

What this really comes down to is a philosophical worldview as to how openness should work and how open it should be. In a liberal society, the right to free expression certainly permits a person to post their opinions online, within certain constraints to protect people from libel, defamation, or threats to their safety. Some journals make reviewers sign a confidentiality agreement about reviews. If this was the case here, a post like this would constitute a violation of that agreement, although I am unaware of any case where this has ever been enforced. Besides, even if reviewers couldn’t publicly post their reviews and discuss the peer review of a manuscript, this would certainly not stop them from making similar comments at conferences, seminars – or on public preprints. In that regard, in my judgement Niko hasn’t done anything wrong here.

At the same time, I fully understand Brad’s frustration. I personally disagree with the somewhat vitriolic and accusatory tone of his response to Niko. This seems both unnecessary and unhelpful. But I agree with him that a personal blog is the wrong venue for posting peer reviews, regardless of whether they are from behind the closed doors of a journal review process or from the outside lawn of post-publication discussion. Obviously, nobody can stop anyone from blogging their opinion on a public piece of science (and a preprint is a public piece of science). Both science bloggers and mainstream journalists constantly write about published research, including preprints that haven’t been peer reviewed. Twitter is frequently ablaze with heated discussion about published research. And I must say that when I first skimmed Niko’s post, I didn’t actually realise that this was a peer review, let alone one he had submitted to a journal, but simply thought it was his musings about the preprint.

The way I see it, social media aren’t peer review but mere opinion chatter. Peer review requires some established process. Probably this should have some editorial moderation – but even without that, at the very least there should be a constant platform for the actual review. Had Niko posted his review as a comment on the preprint server, this would have been entirely acceptable. In an ideal world, he would have done that after writing it instead of waiting for the journal to formally reject the manuscript*. This isn’t to say that opinion chatter is wrong. We do it all the time and talking about a preprint on Twitter is not so different from discussing a presentation you saw at a conference or seminar. But if we treat any channel as equivalent for public peer review, we end up with a mess. I don’t want to constantly track down opinions, some of which are vastly ill-informed, all over the wild west of the internet.

In the end, this whole debacle just confirms my already firmly held belief (Did you expect anything else? 😉 ) that the peer review process should be independent from journals altogether. What we call preprints today should really be the platform where peer review happens. There should be an editor/moderator to ensure a decent and fair process and facilitate a final decision (because the concept of eternally updating studies is unrealistic and infeasible). However, all of this should happen in public. Importantly, journals only come into play at the end, to promote research they consider interesting and perhaps provide some nice editing and formatting.

The way I see it, this is the only way. Science should happen out in the open – including the review process. But what we have here is a clash between promoting openness and a world still partly dominated by the traditional way things have always been done. I think Niko’s heart was in the right place here but by posting his journal reviews on his personal blog he effectively went rogue, or took the law into his own hands, if you will. Perhaps this is the way the world changes but I don’t think this is a good approach. How about we all get together and remake the laws? They are for us scientists, after all, to determine how science should work. It’s about time we start governing ourselves.

Addendum:
I want to add links to two further posts opining on this issue, both of which make important points. First, Sebastian Bobadilla-Suarez, the first author of the manuscript in question, wrote a blog post about his own experiences, especially from the perspective of an early career researcher. Not only are his views far more important, but I actually find his take far more professional and measured than Brad’s post.
Secondly, I want to mention another excellent blog post on this whole debacle by Edwin Dalmaijer which very eloquently summarises this situation. From what I can tell, we pretty much agree in general but Edwin makes a number of more concrete points compared to my utopian dreams of how I would hope things should work.

Massaging data to fit a theory is antithetical to science

I have stayed out of the Wansink saga for the most part. If you don’t know what this is about, I suggest reading about this case on Retraction Watch. I had a few private conversations about this with Nick Brown, who has been one of the people instrumental in bringing about a whole series of retractions of Wansink’s publications. I have a marginal interest in some of Wansink’s famous research, specifically whether the size of plates can influence how much a person eats, because I have a broader interest in the interplay between perception and behaviour.

But none of that is particularly important. The short story is that considerable irregularities have been discovered in a string of Wansink’s publications, many of which have since been retracted. The whole affair first kicked off with a fundamental own-goal of a blog post (now removed, so I am posting Gelman’s coverage instead) he wrote in which he essentially seemed to promote p-hacking. Since then, the problems that came to light ranged from irregularities in (or the impossibility of) some of the data he reported, to evidence of questionable research practices such as cherry-picking or excluding data, to widespread self-plagiarism. Arguably, not all of these issues are equally damning and for some the evidence is more tenuous than for others – but the sheer quantity of problems is egregious. The resulting retractions seem entirely justified.

Today I read an article on Times Higher Education entitled “Massaging data to fit a theory is not the worst research sin” by Martin Cohen, which discusses Wansink’s research sins in a broader context of the philosophy of science. The argument is pretty muddled to me, so I am not entirely sure what the author’s point is – but the effective gist seems to be that concerns about questionable research practices can be shrugged off and that Wansink’s research is still a meaningful contribution to science. In my mind, Cohen’s article reflects a fundamental misunderstanding of how science works and in places sounds positively post-Truthian. In the following, I will discuss some of the more curious claims made by this article.

“Massaging data to fit a theory is not the worst research sin”

I don’t know about the “worst” sin. I don’t even know if science can have “sins”, although this view has been popularised by Chris Chambers’ book and Neuroskeptic’s Circles of Scientific Hell. Note that “inventing data”, a.k.a. going Full-Stapel, is considered the worst affront to the scientific method in the latter worldview. “Massaging data” is perhaps not the same as outright making it up, but on the spectrum of data fabrication it is certainly trending in that direction.

Science is about seeking the truth. In Cohen’s words, “science should above all be about explanation”. It is about finding regularities, relationships, links, and eventually – if we’re lucky – laws of nature that help us make sense of a chaotic, complex world. Altering, cherry-picking, or “shoe-horning” data to fit your favourite interpretation is the exact opposite of that.

Now, the truth is that p-hacking, the garden of forking paths, and flexible outcome-contingent analyses all fall under this category. Such QRPs are extremely widespread and to some degree pervade most of the scientific literature. But just because it is common doesn’t mean that it isn’t bad. Massaging data inevitably produces a scientific literature of skewed results. The only robust way to minimise these biases is through preregistration of experimental designs and confirmatory replications. We are working towards that becoming more commonplace – but in the absence of that it is still possible to do good and honest science.

In contrast, prolifically engaging in such dubious practices, as Wansink appears to have done, fundamentally undermines the validity of scientific research. It is not a minor misdemeanour.

“We forget too easily that the history of science is rich with errors”

I sympathise with the notion that science has always made errors. One of my favourite quotes about the scientific method is that it is about “finding better ways of being wrong.” But we need to be careful not to conflate some very different things here.

First of all, a better way of being wrong is an acknowledgement that science is never a done deal. We don’t just figure out the truth but constantly seek to home in on it. Our hypotheses and theories are constantly refined, hopefully by gradually becoming more correct, but there will also be occasional missteps down a blind alley.

But these “errors” are not at all the same thing as the practices Wansink appears to have engaged in. These were not mere mistakes. While the problems with many QRPs (like optional stopping) have long been underappreciated by many, a lot of the problems in Wansink’s retracted articles are quite deliberate distortions of scientific facts. For most, he could have and should have known better. This isn’t the same as simply getting things wrong.

The examples Cohen offers for the “rich errors” in past research are also not applicable. Miscalculating the age of the earth or presenting an incorrect equation are genuine mistakes. They might be based on incomplete or distorted knowledge. Publishing an incorrect hypothesis (e.g., that DNA is a triple helix) is not the same as mining data to confirm a hypothesis. It is perfectly valid to derive new hypotheses, even if they turn out to be completely false. For example, I might posit that gremlins cause the outdoor socket on my deck to fail. Sooner or later, a thorough empirical investigation will disprove this hypothesis and the evidence will support an alternative, such as that the wiring is faulty. The gremlin hypothesis may be false – and it is also highly implausible – but nothing stops me from formulating it. Wansink’s problem wasn’t with his hypotheses (some of which may indeed turn out to be true) but with the irregularities in the data he used to support them.

“Underlying it all is a suspicion that he was in the habit of forming hypotheses and then searching for data to support them”

Ahm, no. Forming hypotheses before collecting data is how it’s supposed to work. Using Cohen’s “generous perspective”, this is indeed how hypothetico-deductive research works. To what extent this relates to Wansink’s “research sin” depends on what exactly is meant here by “searching for data to support” your hypotheses. If this implies you are deliberately looking for data that confirms your prior belief while ignoring or rejecting observations that contradict it, then that is not merely a questionable research practice, but antithetical to the whole scientific endeavour itself. It is also a perfect definition of confirmation bias, something that afflicts all human beings to some extent, scientists included. Scientists must protect themselves from fooling themselves in this way, and that entails constant vigilance and scepticism of our own pet theories. In stark contrast, engaging in this behaviour actively and deliberately is not science but pure story-telling.

The critics are not merely “indulging themselves in a myth of neutral observers uncovering ‘facts’”. Quite to the contrary, I think Wansink’s critics are well aware of the human fallibility of scientists. People are rarely perfectly neutral when it comes to hypotheses. Even when you are not emotionally invested in which of several explanations for a phenomenon is correct, they are frequently not equally exciting to confirm. Finding gremlins under my deck would certainly be more interesting (and scary?) than evidence of faulty wiring.

But in the end, facts are facts. There are no “alternative facts”. Results are results. We can differ on how to interpret them but that doesn’t change the underlying data. Of course, some data are plainly wrong because they come from incorrect measurements, artifacts, or statistical flukes. These results are wrong. They aren’t facts even if we think of them as facts at the moment. Sooner or later, they will be refuted. That’s normal. But this is a long shot from deliberately misreporting or distorting facts.

“…studies like Wansinkā€™s can be of value if they offer new clarity in looking at phenomena…”

This seems to be the crux of Cohen’s argument. Somehow, despite all the dubious and possibly fraudulent nature of his research, Wansink still makes a useful contribution to science. How exactly? What “new clarity” do we gain from cherry-picked results?

I can see though that Wansink may “stimulate ideas for future investigations”. There is no denying that he is a charismatic presenter and that some of his ideas were ingenious. I like the concept of self-filling soup bowls. I do think we must ask some critical questions about this experimental design, such as whether people can be truly unaware that the soup level doesn’t go down as they spoon it up. But the idea is neat and there is certainly scope for future research.

But don’t present this as some kind of virtue. By all means, give credit to him for developing a particular idea or a new experimental method. But please, let’s not pretend that this excuses the dubious and deliberate distortion of the scientific record. It does not justify the amount of money that has quite possibly been wasted on changing how people eat, or the advice given to schools based on false research. Deliberately telling untruths is not an error; it is called a lie.

[Image: WWII-era poster “Gremlins think it’s fun to hurt you. Use care always.” (US National Archives)]