Students often think that statistics is "all about" hypothesis testing, because that is what instructors in their non-stat courses most often expect them to do with it. There is much more to statistics than hypothesis testing, and some statisticians are of the opinion that hypothesis testing is not even an important part of statistics.

Here are some things you should remember about hypothesis testing:

  • Hypotheses are never about the samples. We can see what's true in the samples. Hypotheses are always about the population or "general case."
  • On the other hand, when we do a hypothesis test, reject the null hypothesis, and conclude that means are "significantly different," THAT is a statement about the samples.
  • Data are not significant. Results are not significant. The analysis is not significant. DIFFERENCES are significant. (Sometimes we also say that effects are significant, meaning that the effect indicates a significant difference.)
  • A significant difference means that the observed difference in some statistic between two (or more) samples is PROBABLY NOT DUE TO RANDOM CHANCE. It does not mean that this difference was caused by the independent variable, and it most certainly does not "prove" that the groups are different or that the hypothesis is correct. (The first sketch after this list illustrates the point.)
  • A significant difference (or effect) "supports" or "confirms" the experimental hypothesis. Never, EVER say that the hypothesis has been proven as the result of a single hypothesis test. Remember, the hypothesis is about the population, and we have not seen the population. We've only seen a small piece of it--the sample. What's true or false in the sample does not prove anything about the population.
  • A hypothesis test may lead you to the wrong conclusion in two ways:
    • You may conclude that the null hypothesis is false when, in fact, it is true. This is called a Type I error. If the null hypothesis is, in fact, true, the probability of committing a Type I error is determined by (and equal to) the alpha level, usually .05. That means 5%, or 1 in 20, of true null hypotheses end up being rejected by hypothesis tests! That's the nature of the beast. There is nothing you can do about it (other than lower the alpha level, which has other unfortunate consequences). (The second sketch after this list simulates this 5% rate.)
    • You may conclude that the null hypothesis is true when, in fact, it is false. That is, you may claim not to see an effect that is really there. This is called a Type II error. If the null hypothesis is, in fact, false, then the probability of committing a Type II error is called beta. The ability of a hypothesis test to find an effect that is really there is called the power of the test and is equal to 1-beta. If you decrease the alpha level of a test in order to avoid making a Type I error, you will generally increase beta and, therefore, decrease the power of the test to find an effect that really is there. You can't have it both ways. Type I and Type II errors are generally traded off.
  • The most important thing you can do to increase the power of a test is to increase the sample size. Small sample sizes generally mean small power. Sample size DOES NOT affect the Type I error rate. You are NOT more likely to make a Type I error because of a small sample size. (The third sketch after this list shows power climbing with sample size.)
  • If you conduct more than one hypothesis test at alpha=.05, the overall (or "familywise") Type I error rate obeys the simple laws of probability. The more tests you conduct, the more likely you are to commit a Type I error on at least one of them. If you do 20 tests and find only 1 significant difference, that one is very likely a Type I error. (The last sketch after this list works out the arithmetic.)
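
To make "probably not due to random chance" concrete, here is a minimal sketch in Python (assuming numpy and scipy are installed; the population values, sample sizes, and seed are invented for illustration). Both samples are drawn from the SAME population, yet their means still differ somewhat; the p-value estimates how often chance alone would produce a difference at least that large.

```python
# Minimal sketch: two samples from the same population still differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(loc=100, scale=15, size=25)  # drawn from N(100, 15)
b = rng.normal(loc=100, scale=15, size=25)  # drawn from the SAME N(100, 15)

res = stats.ttest_ind(a, b)
print(f"difference in sample means: {a.mean() - b.mean():.2f}")
print(f"p-value: {res.pvalue:.3f}")  # how often chance alone would give a
                                     # difference at least this large
```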
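
The 1-in-20 claim about Type I errors can be checked by brute force. In this hypothetical simulation (same assumptions as above), every null hypothesis is true by construction, because both samples always come from identical populations; about 5% of the tests reject anyway.

```python
# Simulate the Type I error rate: every null here is TRUE by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_tests = 0.05, 10_000

rejections = 0
for _ in range(n_tests):
    a = rng.normal(0, 1, size=30)
    b = rng.normal(0, 1, size=30)   # same population as a
    if stats.ttest_ind(a, b).pvalue < alpha:
        rejections += 1             # a Type I error by construction

print(f"proportion of true nulls rejected: {rejections / n_tests:.3f}")
# prints roughly 0.05, i.e., the alpha level
```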
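
Power works the same way, except the null hypothesis is now deliberately false: the population means differ by half a standard deviation, an effect size chosen purely for illustration. Power is estimated as the fraction of tests that correctly reject, and it climbs sharply with sample size.

```python
# Sketch of power = 1 - beta. The null is FALSE here, so every
# rejection is a correct decision.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def power(n, effect=0.5, alpha=0.05, reps=5_000):
    """Estimate power as the fraction of tests that reject a false null."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, size=n)
        b = rng.normal(effect, 1.0, size=n)  # the means really differ
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

print(f"power with n = 10 per group:  {power(10):.2f}")   # weak test
print(f"power with n = 100 per group: {power(100):.2f}")  # much stronger
```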
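
Finally, for k independent tests at alpha = .05, the probability of at least one Type I error is 1 - (1 - .05)^k (independence is an idealization here; correlated tests behave somewhat differently). The arithmetic grows quickly:

```python
# Familywise Type I error rate for k independent tests at alpha = .05:
# P(at least one Type I error) = 1 - (1 - alpha)**k
alpha = 0.05
for k in (1, 5, 20):
    print(f"{k} tests: {1 - (1 - alpha) ** k:.2f}")
# with 20 tests the familywise rate is about .64 -- a lone "significant"
# result among 20 tests is therefore quite suspect
```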

One more thing. Write this on the back of your hand if you have to, because this is an UNACCEPTABLE mistake at this level. You reject the null hypothesis, and conclude in favor of the alternative hypothesis, when the obtained p-value is LESS THAN alpha, i.e., generally less than .05. When the obtained p-value is greater than alpha, you fail to reject the null hypothesis. (In this class, you may also say "accept the null hypothesis," although that is generally considered bad form. Just remember that accepting the null hypothesis is not the same as concluding that the null hypothesis is true, just that we could not reject it. What you may NEVER say--without losing BIG points--is that the null hypothesis was proven.)
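
Here is that rule in code form, as a sketch with made-up numbers (assuming scipy):

```python
# The decision rule: reject H0 when p < alpha; otherwise FAIL TO
# REJECT -- never "prove" -- the null hypothesis.
from scipy import stats

alpha = 0.05
group1 = [5.1, 4.9, 6.0, 5.5, 5.8]   # hypothetical scores
group2 = [4.2, 4.0, 4.8, 4.4, 4.6]

p = stats.ttest_ind(group1, group2).pvalue
if p < alpha:
    print(f"p = {p:.3f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p:.3f} >= {alpha}: fail to reject the null hypothesis")
```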
