
On studying precognition

Today I received an email from somebody who had read some of my discussions of Psi research. They made an interesting point that has so far been neglected in most of the debates I have participated in. With their permission, I am posting their email (without identifying information) and my response to it. I hope this clarifies my views:

I also have experienced real and significant episodes of precognition. After many experiences I researched my ancestry and found relatives who had histories of episodes of precognition. The studies I have read that claim precognition is not real all have the same error. I can’t pick a card, I can’t tell you what the next sound will be. Precognition does not work like that. I will demonstrate with this example.
I was standing at the front desk at work when I got a terrible feeling something was wrong. I didn’t know what. I called a friend and told him something was wrong. I began a one hour drive home and continued talking to my friend. The feeling that something was wrong grew stronger and stronger as I approached the river. I saw a city bus parked on the side of the road. Many vehicles were down by the river. I passed that scene and then told my friend that a child had drowned and that she was close to the bridge about half a mile down river. The next day the TV news confirmed that she was found at the bridge down river.
No one told me there was a drowning, no one told me it was a girl, no one knew she was floating by the bridge.
This type of thing happens to me regularly. I believe it results from the same thing that will stampede cattle. I think humans communicate through speech and other forms of non verbal communication. I think somehow I am able to know what the herd is thinking or saying without being there. I think the reason I got the feeling something was wrong had to do with the escalating fear and crying out of the people who were madly searching for the child who fell in the river.
So trying to study precognition by getting a person to predict the next card will never work. Look at the reality of how it happens and see if you can study it a different way.

My response to this:

Thank you for your email. I’d say we are in greater agreement than you may think. What I have written on my blog and in the scientific literature about precognition/telepathy/presentiment pertains strictly to the scientific experiments that have been done on these paranormal abilities, usually with the sole aim of proving their existence. You say you “can’t pick a card” etc. – tell that to the researchers who believe that a very subtle deviation from chance performance in such simple experiments is evidence for precognition.
Now, do I believe you have precognition? No, I don’t. The experiences you describe are not uncommon but they may be uncommonly frequent for you. Nevertheless they are anecdotal evidence and my first hunch would be to suspect cognitive biases that we know can masquerade as paranormal abilities. There may also be cognitive processes we currently simply have no understanding of. How we remember our own thoughts is still very poorly understood. The perception of causality is a fascinating topic. We know we can induce causality illusions but this line of inquiry is still in its infancy.
But I cannot be certain of this. Perhaps you do have precognition. I don’t have any intention of convincing you that you don’t; I only want to clarify why I don’t believe it, at least not based on the limited information I have. The main issue here is that your precognition is unfalsifiable. You say yourself that “Precognition does not work like that.” If it does not occur with the same regularity as other natural phenomena, it isn’t amenable to scientific study. Psi researchers believe that precognition etc. do have that regularity and so they think you can demonstrate it with card-picking experiments. My primary argument is with that line of thinking.
I am not one of those scientists who feel the need to tell everyone what to believe. Such people are just as irritating as religious fundamentalists who seek to convert everybody. If some belief is unfalsifiable, like the existence of God or your belief in your precognition, then it falls outside the realm of science. I have no problem with you believing that you have precognition, at least as long as it doesn’t cause any harm to anyone. But unless we can construct a falsifiable hypothesis, science has no place in it.

Experimenter effects in replication efforts

I mentioned the issue of data quality before but reading Richard Morey’s interesting post about standardised effect sizes the other day made me think about this again. Yesterday I gave a lecture discussing Bem’s infamous precognition study and the meta-analysis he recently published of the replication attempts. I hadn’t looked very closely at the meta-analysis data before but for my lecture I produced the following figure:

[Figure: Bem-Meta – standardised effect sizes for all 90 results in the meta-analysis]

This shows the standardised effect size for each of the 90 results in that meta-analysis split into four categories. On the left in red we have the ten results by Bem himself (nine from his original study plus one replication of one of these experiments by Bem himself). Next, in orange, we have what the meta-analysis calls ‘exact replications’, that is, replications that used his program/materials. In blue we have ‘non-exact replications’ – those that sought to replicate the paradigms but didn’t use his materials. Finally, on the right in black, we have what I called ‘different’ experiments. These are at best conceptual replications: they also test whether precognition exists but use different experimental protocols. The hexagrams denote the means across all the experiments in each category (these are unweighted means but that’s not important for this post).
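
For concreteness, here is a minimal sketch in Python of how a figure like this could be reproduced. The file name, column names, and category labels are all hypothetical placeholders – the actual meta-analysis data are not included here:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical file and column names – the meta-analysis data
# themselves are not reproduced here.
df = pd.read_csv("bem_meta.csv")  # columns: category, effect_size

order = ["Bem", "Exact", "Non-exact", "Different"]

fig, ax = plt.subplots()
for i, cat in enumerate(order):
    d = df.loc[df["category"] == cat, "effect_size"]
    ax.scatter([i] * len(d), d, alpha=0.6)      # individual results
    ax.scatter(i, d.mean(), marker="h", s=200)  # unweighted category mean
ax.axhline(0, color="grey", linewidth=0.5)
ax.set_xticks(range(len(order)))
ax.set_xticklabels(order)
ax.set_ylabel("Standardised effect size")
plt.show()
```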

While the means of all categories are evidently greater than zero, the most notable thing is that Bem’s findings are dramatically different from the rest. The mean effect sizes in the other categories are below or barely at 0.1, and all of them show considerable spread to either side of zero, whereas all ten of Bem’s results are above zero and, with one exception, above 0.1. This is certainly very unusual and there are all sorts of reasons we could discuss for why this might be…

But let’s not. Instead let’s assume for the sake of this post that there is indeed such a thing as precognition and that Daryl Bem simply knows how to get people to experience it. I doubt that this is a plausible explanation in this particular case – but I would argue that for many kinds of experiments such “experimenter effects” are probably notable. In an fMRI experiment different labs may differ considerably in how well they control participants’ head motion or even simply in the image quality of the MRI scans. In psychophysical experiments different experimenters may differ in how well they explain the task to participants or how meticulous they are in ensuring that participants really understood the instructions, etc. In fact, the quality of the methods surely must matter in all experiments, whether they are in astronomy, microbiology, or social priming. Now this argument has been made in many forms, most infamously perhaps in Jason Mitchell’s essay “On the emptiness of failed replications”, which drew much ire from many corners. You may disagree with Mitchell on many things but not on the fact that good methods are crucial. What he gets wrong is laying the blame for failed replications solely at the feet of “replicators”. Who is to say that the original authors didn’t bungle something up?

However, it is true that all good science should seek to reduce noise from irrelevant factors to obtain as clean observations as possible of the effect of interest. Using Bem’s precognition experiments as an example again, we could hypothesise that he indeed had a way to relax participants and unlock their true precognitive potential that others seeking to replicate his findings did not. If that were true (I’m willing to bet a fair amount of money that it isn’t, but that’s not the point), this would indeed mean that most of the replications – failed or successful – in his meta-analysis are of low scientific value. All of these experiments are more contaminated by noise confounds than his; only he provides clean measurements. Standardised effect sizes like Cohen’s d divide the raw effect by a measure of uncertainty or dispersion in the data. The dispersion is a direct consequence of the noise factors involved. So it should be unsurprising that the effect size is greater for experimenters who are better at eliminating unnecessary noise.
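
A toy simulation makes this concrete (this has nothing to do with Bem’s actual paradigm – all numbers are invented for illustration). Two simulated labs measure the identical raw effect, but the noisier measurement inflates the denominator of Cohen’s d:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50            # participants per simulated experiment
raw_effect = 1.0  # the same "true" raw effect in both labs

def cohens_d(x):
    """One-sample Cohen's d: mean divided by standard deviation."""
    return x.mean() / x.std(ddof=1)

# A meticulous lab measuring the effect with little noise...
clean = rng.normal(raw_effect, 2.0, n)
# ...and a sloppy lab measuring the identical effect with much more noise.
noisy = rng.normal(raw_effect, 10.0, n)

print(f"clean lab: d = {cohens_d(clean):.2f}")  # comes out large
print(f"noisy lab: d = {cohens_d(noisy):.2f}")  # comes out much smaller
```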

Statistical inference seeks to estimate the population effect size from a limited sample. Thus, a meta-analytic effect size is an estimate of the “true” effect size derived from a set of replications. But since this population effect includes the noise from all the different experimenters, it does not actually reflect the true effect. The true effect is people’s inherent precognitive ability. The meta-analytic effect size estimate spoils that with all the rubbish others pile on through their sloppy Psi experimentation skills. Surely we want to know the former, not the latter? Again, for precognition most of us will probably agree that this is unlikely – it seems more trivially explained by some Bem-related artifact – but in many situations this is a very valid point: imagine one researcher manages to produce a cure for some debilitating disease but others fail to replicate it. I’d bet that most people wouldn’t run around shouting “Failed replication!”, “Publication bias!”, “P-hacking!” but would want to know what makes the original experiment – the one with the working drug – different from the rest.
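
Extending the toy simulation from above (again, all numbers invented): pool one clean lab with nine noisy ones and the unweighted meta-analytic mean of the standardised effect sizes ends up much closer to the noisy labs’ estimates than to the clean measurement:

```python
import numpy as np

rng = np.random.default_rng(2)
n, raw_effect = 50, 1.0

def cohens_d(x):
    return x.mean() / x.std(ddof=1)

# One clean lab plus nine labs adding varying amounts of extra noise,
# all measuring the same underlying raw effect.
noise_sds = [2.0] + [10.0] * 9
ds = [cohens_d(rng.normal(raw_effect, sd, n)) for sd in noise_sds]

print(f"clean lab's d:        {ds[0]:.2f}")
print(f"meta-analytic mean d: {np.mean(ds):.2f}")  # dragged down by the noisy labs
```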

The way I see it, meta-analysis of large-scale replications is not the right way to deal with this problem. Meta-analyses of one lab’s replications are worthwhile, especially as a way to summarise a set of conceptually related experiments – but then you need to take them with a grain of salt because they aren’t independent replications. Large-scale meta-analyses across different labs, however, don’t really tell us all that much. They simply don’t estimate the effect size that really matters. The same applies to replication efforts (and I know I’ve said this before). This is the point on which I have always sympathised with Jason Mitchell: you cannot conclude a lot from a failed replication. A successful replication that nonetheless demonstrates that the original claim is false is another story, but simply failing to replicate some effect only tells you that something is (probably) different between the original and the replication. It does not tell you what the difference is.

Sure, it’s hard to make that point when you have a large-scale project like Brian Nosek’s “Estimating the reproducibility of psychological science” (I believe this is a misnomer because they mean replicability, not reproducibility – but that’s another debate). Our methods sections are supposed to allow independent replication. The fact that so few of their attempts produced significant replications is a great cause for concern. It seems doubtful that all of the original authors knew what they were doing while so few of the “replicators” did. But in my view, there are many situations where this is not the case.

I’m not necessarily saying that large-scale meta-analysis is entirely worthless, but I am skeptical that we can draw many firm conclusions from it. In cases where there is reasonable doubt about differences in data quality or experimenter effects, you need to test these differences. I’ve repeatedly said that I have little patience for claims about “hidden moderators”. You can posit moderating effects all you want but they are not helpful unless you test them. The same principle applies here. Rather than publishing one big meta-analysis after another, either showing that some effect is probably untrue or, as Psi researchers are wont to do, trying to prove that precognition, presentiment, clairvoyance or whatever are real, I’d like to see more attempts to rule out these confounds.

In my opinion the only way to do this is through adversarial collaboration. If an honest skeptic can observe Bem conduct his experiments, inspect his materials, and analyse the data for themselves, and he still manages to produce these findings, that would go a much longer way towards convincing me that these effects are real than any meta-analysis ever could.

Humans are dirty test tubes

The Objectivity Illusion

The other day I posted an angry rant about this whole sorry “trouble with girls” debacle. Don’t worry, I won’t write about this bullshitstorm of a “debate” in any more detail. There just wouldn’t be any point. I will, however, write a general note about perception and debates.

As a psychic (debating with psi people is contagious), I knew from the start that the reaction to my post would serve as a perfect example of what I was talking about. Or perhaps I planned this all along? It was an online experiment to show how any given piece of speech can be understood in at least as many different ways as there are people listening. (Actually, if that were true, Chris Chambers and Dorothy Bishop would call this HARKing and tell me I should have preregistered my hypothesis – so I can’t claim to have predicted this really :/)

In all seriousness though, the reactions – including the single commenter on my post – illustrate how people can take just about anything from what you say. People just hear what they want to hear – even if they really don’t like what they’re hearing. While my post expressed no endorsement or defense of any one side in that debate, certain readers immediately jumped to conclusions based on their entrenched philosophical/political stance. I obviously have an opinion on this affair but have repeatedly explained that I won’t state it. Unlike those of the brainless jokers and paranoid nutcases who populate both sides of this Twitter fight (notable exceptions notwithstanding), my opinion on this is a bit more complex; stating it would keep me here all day and I seriously have no appetite for that.

My post wasn’t about that though. It was about the idiocy and total irrelevance of whether Tim Hunt did or did not utter certain words in his speech. It stressed the pointlessness of arguing over who “lied” about their account of things when there is no factual record to compare it to and no tangible evidence to prove that somebody was deliberately distorting the truth. These things are pointless because they really don’t matter one iota and don’t address the actual issues.

As I discussed previously, our view of the world and our reactions to it are inherently biased. This is completely normal and defines the human condition. I don’t think we can entirely overcome these biases – and it isn’t even a given that this would be a good thing. The kinds of perceptual biases my colleagues and I study in the lab (things like this) exist for good reasons – even if the reasons remain in many cases highly controversial. They could reflect our statistical experience of the environment. Alternatively (and the two explanations may in fact not be mutually exclusive), they could be determined by very fundamental processes in the brain that backfire when they encounter these particular situations. In this way, perceptual illusions reveal the hidden workings by which your brain makes sense of the world.

Discussions and catfights – like those about climate change, gun control, religious liberty, psi research, Bayes vs frequentism, or the comments made by certain retired professors – are no different. Social media makes them particularly vitriolic and incendiary. I don’t know if this is because social media actually makes them worse or just because it makes the worst more visible. Either way, fights like this are characterised by the same kinds of biases that distort all other perception and behaviour. People are amazingly resistant to factual evidence. You can show somebody very clear data refuting their preconceived notions and yet they won’t budge. It may even drive them deeper into their prior beliefs. Perhaps there is some sort of Bayesian explanation for this phenomenon – if so, I’d like to hear it. Anyway, if there is one thing you can trust, it is that objectivity is an illusion.

Now as I’ve said, such cognitive and perceptual biases are normal and can’t be prevented. But I think all is not lost. I do believe they can be counteracted – to some extent at least – if we remain vigilant of them. We may even make them work for us instead of being swayed by them. I am wondering about ways to achieve that. Any ideas are welcome – I’d be happy to chat about this. Here, though, is the first principle according to the (biased) worldview of Sam:

If anyone tells you that they are objective, that their account is “investigative” or “forensic” or “factual,” or if they tell you outright that the other side is lying, then it doesn’t matter who they are or what credentials they may have. They are blinkered fools, they are wrong by definition, and they don’t deserve a second of your time.

Conversations about Psi

Please note that this post is a re-post from my lab webpage. I removed it from there because the opinions expressed here are my own and shouldn’t be taken to reflect those of my lab members.

In 2014 I was drawn into debates with various parapsychologists about purported extrasensory perception, such as precognition, telepathy, or clairvoyance (also frequently referred to as “Psi”). It is important to note that there is nothing wrong per se with studying such phenomena. For some “mainstream” researchers, even talking about these topics seems to carry a stigma, and such studies are sometimes ignored. Even though I think many of the claims from parapsychology research are preposterous, ignoring or shunning hypotheses should not be part of the scientific method. Here is a quote by Carl Sagan about a person who had put forth an implausible theory about the solar system:

“Science is a self-correcting process. To be accepted, new ideas must survive the most rigorous standards of evidence and scrutiny. The worst aspect of the Velikovsky affair is not that many of his ideas were wrong or silly or in gross contradiction to the facts.

Rather, the worst aspect is that some scientists attempted to suppress Velikovsky’s ideas. The suppression of uncomfortable ideas may be common in religion or in politics, but it is not the path to knowledge. And there’s no place for it in the endeavor of science.”

Carl Sagan’s Cosmos, Episode 4, Heaven and Hell

So-called Psi phenomena are all fairly common human experiences and therefore gaining a better understanding of them will doubtless advance our general understanding of how the mind works. Critically though, such study calls for an open-minded approach that allows us to see past our preconceptions (I am fully aware of the irony of this statement: failing to keep an open mind is a criticism parapsychologists frequently level against “skeptics”). It requires taking seriously all the possible explanations and working gradually from the bottom up until we have a theory with adequate explanatory power.

Most Psi experiences probably have a very simple explanation. Some observations may indeed be evidence of some process we don’t currently understand; however, the vast majority most likely aren’t. It is far more plausible that the mechanisms by which our brain tries to make sense of the world around us can go wrong occasionally and thus give rise to experiences that seem to contradict physical reality. We know the brain allows a form of precognition, which is called making educated guesses. It also has a kind of telepathic ability to infer what another person is thinking or feeling – this is known as theory of mind. And it even allows clairvoyance of a sort by tapping the endless power of the imagination. Moreover, we know that the human mind is very poor at detecting randomness, precisely because it has evolved to be excellent at detecting patterns, a crucial skill for ensuring survival in a cluttered, chaotic environment. Our intuitions also frequently make us fall for simple logical fallacies, and even people with statistical training are not immune to this. By investigating and scrutinising Psi experiences in these terms we can learn a lot about the mind and the brain. However, it is when this cautious approach is replaced by the aim of supporting the existence of a “statistical anomaly that has no mundane explanation” that things go haywire. This is when psychology turns into parapsychology*. It is my estimation that most research on Psi does not aim for a better understanding of the cosmos. Rather, it strives to perpetually maintain the status quo of not-understanding.

As for many “mainstream” scientists, my interest in this line of research was originally sparked by the publication of a study by Daryl Bem in a major psychology journal about apparent precognition effects. I used some of his original data for an inferential method I have been developing because I felt that the implausibility of his findings made for a very good demonstration of how statistical procedures can fail. However, as I outlined above, there is also a wider philosophical aspect to this entire debate in that much of the parapsychology literature seems to violate fundamental principles of the scientific method: Occam’s Razor and informed skepticism. I was thus drawn into debating these issues with some of these researchers. Here I will list the various publications and posts I have written as part of this discussion.

We should have seen this coming – A commentary on Mossbridge et al (2014), published in Front. Hum. Neurosci. The original authors published this response on 15 January 2015.

Why presentiment has not been demonstrated – Additional clarifications on my Frontiers commentary

I saw this coming – A counter-response to the response by Mossbridge et al (written before theirs was published)

Physics, methods, and Psi – A response to a blog post about Psi by Jacob Jolij

Finally, I published an external blog post arguing why I feel Psi is not a legitimate hypothesis. This was partly a further response to Jacob Jolij and partly a general response to Mossbridge et al and Bem.

I was also asked to review an EEG study investigating telepathic links between individuals. This journal (F1000 Research) has a unique model of transparency. All of the reviews are post-publication and thus visible to all. Critically, all the raw data of the study are also publicly available, allowing the reviewers (or anyone else) to inspect them. You can read the various versions of that manuscript and the review discussion here.

*) Some people use “parapsychology” to simply mean the scientific investigation of purported paranormal or psychic phenomena, and perhaps this is the traditional meaning of the term. This seems odd to me, however. Such investigation falls squarely within the area of “mainstream” science. The addition of the “para” prefix separates such investigations unnecessarily from the broader scientific community. It is my impression that many parapsychologists do base their research on the Psi assumption, despite protestations to the contrary, and that they are mainly concerned with convincing others that Psi exists.