Monthly Archives: March 2015

The TES Challenge to Greg Francis

This post is a follow-up to my previous post, “Statistical alchemy and the ‘test for excess significance’”. In the comments on that post, Greg Francis objected to my points about the Test for Excess Significance. I laid out a challenge in which I would use simulation to demonstrate these points. Greg Francis agreed to the details; this post is about the results of the simulations (with links to the code, etc.).


A challenge

In my previous post, I said this:

Morey: “…we have bit of a mystery. That $E$ [the expected number of non-significant studies in a set of $n$ studies] equals the sum of the expected [Type II error] probabilities is merely asserted [by Ioannidis and Trikalinos]. There is no explanation of what assumptions were necessary to derive that fact. Moreover, it is demonstrably false.”

Greg Francis replied:

Francis:“…none of your examples of the falseness of the equation are valid because you fix the number of studies to be n, which is inconsistent with your proposed study generation process. Your study generation process works if you let n vary, but then the Ioannidis & Trikalinos formula is shown to be correct…[i]n short, you present impossible sampling procedures and then complain that the formula proposed by Ioannidis & Trikalinos does not handle your impossible situations.”

To which I replied,

Morey:“If you don’t believe me, here’s a challenge: you pick a power and a random seed. I will simulate a very large ‘literature’ according to the ‘experimenter behaviour’ of my choice, importantly with no publication bias or other selection of studies. I will guarantee that I will use a behaviour that will generate experiment set sizes of 5. I will save the code and the ‘literature’ coded in terms of ‘sets’ of studies and how many significant and nonsignificant studies there are. You get to guess what the average number of significant studies are in sets of 5 via I&T’s model, along with a 95% CI (I’ll tell you the total number of such studies). That is, we’re just using Monte Carlo to estimate the expected number of significant studies in sets of experiments n=5; that is, precisely what I&T use as the basis of their model (for the special case of n=5).” “This will answer the question of ‘what is the expected number of nonsignificant studies in a set of n?’”

This challenge will very clearly show that my situations are not “impossible”. I can sample them in a very simple simulation. Greg Francis agreed to the simulation:

Francis: “Clearly at least one of us is confused. Maybe we can sort it out by trying your challenge. Power=0.5, random seed= 19374013”

I further clarified:

Morey: “Before I do this, though, I want to make sure that we agree on what this will show. I want to show that the expected number of nonsignificant studies in a set of n (=5) studies is not what I&T say it is, and hence, the reasoning behind the test is flawed (because ‘excess significance’ is defined as deviation from this expected number). I also want to be clear what the prediction is here: Since the power of the test is .5, according to I&T, the expected number of nonsignificant studies in a set of 5 is 2.5. Agreed?”

…to which Greg Francis agreed.

I have performed this simulation. Before reading on, you should read the web page containing the results (the HTML file referenced below).

The table below shows the results of a simulation of 1,000,000 “sets” of studies. All simulated “studies” are published in this simulation; no questionable research practices are involved. The first column shows $n$, and the second column shows the average number of non-significant studies for sets of size $n$, which is a Monte Carlo estimate of I&T’s $E$. As you can see, for sets of $n=5$ it is not 2.5.
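
For reference, I&T’s expectation quoted above (the sum of the Type II error probabilities) gives, for a set of $n$ studies that all have the same power, $E = \sum_{i=1}^{n}(1-\mathrm{power}_i) = n(1-\mathrm{power})$; with $n=5$ and power $0.5$, that is $E = 5 \times 0.5 = 2.5$, the value shown in the third column.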

Total studies (n)   Mean nonsig. studies   Expected by TES (E)   SD nonsig. studies    Count
                1                      1                   0.5                    0   499917
                2                      1                   1.0                    0   249690
                3                      1                   1.5                    0   125269
                4                      1                   2.0                    0    62570
                5                      1                   2.5                    0    31309
                6                      1                   3.0                    0    15640
                7                      1                   3.5                    0     7718
                8                      1                   4.0                    0     3958
                9                      1                   4.5                    0     1986
               10                      1                   5.0                    0      975

(I have truncated the table at $n=10$; see the HTML file for the full table.)
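
One hypothetical decision rule that is consistent with the table above (not necessarily the exact rule used for the challenge; see the linked code for that) is “keep running studies until the first nonsignificant result, then stop.” Every set then ends with exactly one nonsignificant study, whatever its size $n$, and the set sizes roughly halve in frequency as $n$ grows. A minimal R sketch of that rule, under those assumptions:

    # A minimal sketch (not the code used for the challenge) of the rule above:
    # each study is significant with probability equal to the power, and the
    # set stops at the first nonsignificant result.
    set.seed(19374013)   # the seed chosen in the comments quoted above
    power  <- 0.5
    n_sets <- 1e5        # 1e6 in the post; reduced here so the sketch runs quickly

    one_set <- function() {
      n <- 0L
      repeat {
        n <- n + 1L
        if (runif(1) >= power) break   # a nonsignificant study ends the set
      }
      c(total = n, nonsig = 1L)        # every set has exactly one nonsignificant study
    }

    sets <- as.data.frame(t(replicate(n_sets, one_set())))

    table(sets$total)                              # set sizes: counts roughly halve as n grows
    aggregate(nonsig ~ total, data = sets, mean)   # mean nonsignificant studies: 1 for every n
    1 - sum(sets$nonsig) / sum(sets$total)         # overall proportion significant: about the power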

I also showed that you can change the experimenter’s behaviour and make it 2.5. This indicates that the assumptions one makes about experimenter behaviour matter for the expected number of non-significant studies in a particular set. Across all sets of studies, the proportion of significant studies is expected to equal the power; how that proportion is distributed across sets of different sizes, however, is a function of the decision rule.
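
For contrast, here is a minimal sketch of a rule that fixes the set size at 5 in advance; under that behaviour the mean number of nonsignificant studies does come out at I&T’s 2.5 (the seed and number of sets are, again, only illustrative):

    # Fixed-n behaviour: every set contains exactly 5 studies, each significant
    # with probability equal to the power.
    set.seed(19374013)
    power  <- 0.5
    nonsig <- replicate(1e5, sum(runif(5) >= power))   # nonsignificant count per set of 5
    mean(nonsig)                                       # approximately 2.5, as I&T's formula says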

The expression for the expected number of non-significant studies in a set of $n$ is not correct (without further very strong, unwarranted assumptions).


Two things to stop saying about null hypotheses

There is a currently fashionable way of describing Bayes factors that resonates with experimental psychologists. I hear it often, particularly as a way to describe a particular use of Bayes factors. For example, one might say, “I needed to prove the null, so I used a Bayes factor,” or “Bayes factors are great because with them, you can prove the null.” I understand the motivation behind this sort of language, but please: stop saying one can “prove the null” with Bayes factors.

I also often hear other people say “but the null is never true.” I’d like to explain why we should avoid saying both of these things.


Null hypotheses are tired of your jibber jabber

Why you shouldn’t say “prove the null”

Statistics is complicated. People often come up with colloquial ways of describing what a particular method is doing: for instance, one might say a significance test gives us “evidence against the null”; one might say that a “confidence interval tells us the 95% most plausible values”; or one might say that a Bayes factor helps us “prove the null.” Bayesians are often quick to correct misconceptions that people use to justify their use of classical or frequentist methods. It is just as important to correct misconceptions about Bayesian methods.

In order to understand why we shouldn’t say “prove the null”, consider the following situation: You have a friend who claims that they can affect the moon with their mind. You, of course, think this is preposterous. Your friend looks up at the moon and says “See, I’m using my abilities right now!” You check the time.

You then decide to head to the local lunar seismologist, who has good records of subtle moon tremors. You ask her about what happened at the time your friend was looking at the moon, and she reports back to you that lunar activity at that time was stronger than it typically is 95% of the time (thus passing the bar for “statistical significance”).

Does this mean that there is evidence for your friend’s assertion? The answer is “no.” Your friend made no statement about what one would expect from the seismic data. In fact, your friend’s statement is completely unfalsifiable (as is the case with the typical “alternative” in a significance test, $\mu\neq0$).

But consider the following alternative statements your friend could have made: “I will destroy the moon with my mind”; “I will make very large tremors (with magnitude $Y$)”; “I will make small tremors (with magnitude $X$).” How do we now regard your friend’s claims in light of what happened?

  • “I will destroy the moon with my mind” is clearly inconsistent with the data. You (the null) are supported by an infinite amount, because you have completely falsified their statement that they would destroy the moon (the alternative).
  • “I will make very large tremors (with magnitude $Y$)” is also inconsistent with the data, but if we allow a range of uncertainty around the claim, it may not be completely falsified. Thus you (the null) are supported, but not by as much as in the first situation.
  • “I will make small tremors (with magnitude $X$)” may support you (the null) or your friend (the alternative), depending on how the predicted magnitude compares with what was observed.

Here we can see that the support for the null depends on the alternative at hand. This is, of course, as it must be. Scientific evidence is relative. We can never “prove the null”: we can only “find evidence for a specified null hypothesis against a reasonable, well-specified alternative”. That’s quite a mouthful, it’s true, but “prove the null” creates misunderstandings about Bayesian statistics, and makes it appear that it is doing something it cannot do.

In a Bayesian setup, the null and alternative are both models and the relative evidence between them will change based on how we specify them. If we specify them in a reasonable manner, such that the null and alternative correspond to relevant theoretical viewpoints or encode information about the question at hand, the relative statistical evidence will be informative for our research ends. If we don’t specify reasonable models, then the relative evidence between the models may be correct, but useless.
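
As a concrete illustration (a sketch, with made-up data), the BayesFactor R package’s ttestBF() lets you change the alternative by changing the scale of its Cauchy prior on effect size; the same data then yield different amounts of evidence for the same point null:

    library(BayesFactor)

    set.seed(123)
    x <- rnorm(50)   # hypothetical data, generated here under a true point null

    # Two different alternatives for the same null (standardized effect = 0):
    bf_narrow <- ttestBF(x, rscale = 0.2)   # alternative that expects small effects
    bf_wide   <- ttestBF(x, rscale = 1.4)   # alternative that expects large effects

    # Evidence for the null is relative to the alternative it is compared against:
    1 / bf_narrow   # null vs. the narrow alternative
    1 / bf_wide     # null vs. the wide alternative (more evidence for the null here)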

We never “prove the null” or “compute the probability of the null hypothesis”. We can only compare a null model to an alternative model, and determine the relative evidence.

[See also Gelman and Shalizi (2013) and Morey, Romeijn and Rouder (2013)]

Why you shouldn’t say “the null is never true”

A common retort to tests including a point null (often called a ‘null’ hypothesis) is that “the null is never true.” This is backed up by four sorts of “evidence”:

  • A quote from an authority: “Tukey or Cohen said so!” (Tukey was smart, but this is not an argument.)
  • Common knowledge / “experience”: “We all know the null is impossible.” (This was Tukey’s “argument”)
  • Circular: “The area under a point in a density curve is 0.” (Of course if your model doesn’t have a point null, the point null will be impossible.)
  • All models are “false” (even if this were true — I think it is actually a category error — it would equally apply to all alternatives as well)

The most attractive seems to be the second, but it should be noted that people almost never use techniques that allow finding evidence for null hypotheses. Under these conditions, how is one determining that the null is never true? If a null were ever true, we would never be able to accumulate evidence for it with such techniques, so the second argument definitely has a hint of circularity as well.

When someone says “The null hypothesis is impossible/implausible/irrelevant”, what they are saying in reality is “I don’t believe the null hypothesis can possibly be true.” This is a totally fine statement, as long as we recognize it for what it is: an a priori commitment. We should not pretend that it is anything else; I cannot see any way that one can find universal evidence for the statement “the null is impossible”.

If you find the null hypothesis implausible, that’s OK. Others might not find it implausible. It is ultimately up to substantive experts to decide what hypotheses they want to consider in their data analysis, and not up to methodologists or statisticians to tell experts what to think.

Any automatic behavior — either automatically rejecting all null hypotheses, or automatically testing all null hypotheses — is bad. Hypothesis testing and estimation should be considered and deliberate. Luckily, Bayesian statistics allows both to be done in a principled, coherent manner, so informed choices can be made by the analyst and not by the restrictions of the method.

BayesFactor updated to version 0.9.11-1

The BayesFactor package has been updated to version 0.9.11-1. The changes are:

  CHANGES IN BayesFactor VERSION 0.9.11-1

CHANGES
  * Fixed memory bug causing importance sampling to fail.

  CHANGES IN BayesFactor VERSION 0.9.11

CHANGES
  * Added support for prior/posterior odds and probabilities. See the new vignette for details.
  * Added approximation for t test in case of large t
  * Made some error messages clearer
  * Use callbacks at least once in all cases
  * Fix bug preventing continuous interactions from showing in regression Gibbs sampler
  * Removed unexported function oneWayAOV.Gibbs(), and related C functions, due to redundancy
  * gMap from model.matrix is now 0-indexed vector (for compatibility with C functions)
  * substantial changes to backend, to Rcpp and RcppEigen for speed
  * removed redundant struc argument from nWayAOV (use gMap instead)
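
The “prior/posterior odds and probabilities” item above refers to a new vignette, which documents the dedicated functions for this. As a rough sketch of the underlying arithmetic (with made-up data and an assumed 1:1 prior odds, purely for illustration), the same quantities can be computed by hand from a Bayes factor:

    library(BayesFactor)

    set.seed(1)
    x  <- rnorm(30, mean = 0.3)        # made-up data
    bf <- ttestBF(x)                   # Bayes factor: alternative vs. point null

    prior_odds     <- 1                          # assumed 1:1 prior odds, for illustration only
    posterior_odds <- prior_odds * extractBF(bf)$bf
    posterior_odds / (1 + posterior_odds)        # posterior probability of the alternative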