
The Chi-Squared Statistic

by J. M. Haile

Many statistical parameters characterize measured values themselves: the quality of data, the goodness of a fit, the significance of a correlation. But a few apply not to the measurements, but to the frequencies of their occurrence. The chi-squared statistic is one.

Chi-squared is used to help us decide the extent to which two discrete frequency distributions are the same. Often one distribution is measured and the other is theoretically expected. Then chi-squared is defined by [1, 2]

\chi^2 = \sum_{i=1}^{M} \frac{(N_i - n_i)^2}{n_i} \qquad (1)

where M is the number of possible outcomes for one event, N_i is the number of times we observe the particular outcome i, and n_i is the number of times we expect outcome i to be observed.

If the event is to flip a coin, then there are two possible outcomes, so M = 2. However, if the event is to throw a die, then M = 6. We obtain values for the N_i in (1) by performing a total of K events and counting the number of times each outcome i is observed. Values for the expected numbers n_i are usually obtained by estimating the probability p_i of observing outcome i and computing the expected number from

n_i = p_i K \qquad (2)

and then (1) can be written as

\chi^2 = \sum_{i=1}^{M} \frac{(N_i - p_i K)^2}{p_i K} \qquad (3)
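
To make the bookkeeping concrete, here is a minimal Python sketch of (3); the function and variable names are my own, not taken from the references.

    # Chi-squared from observed counts and assumed outcome probabilities,
    # following Eqs. (1)-(3). A sketch; names are illustrative only.

    def chi_squared(observed, probs):
        """Return chi-squared for observed counts N_i against expected
        counts n_i = p_i * K, where K is the total number of events."""
        K = sum(observed)                       # total number of events
        expected = [p * K for p in probs]       # Eq. (2): n_i = p_i K
        return sum((N - n) ** 2 / n             # Eq. (1)
                   for N, n in zip(observed, expected))

    # 50 flips giving 35 heads, tested against a fair coin:
    print(chi_squared([35, 15], [0.5, 0.5]))    # 8.0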

In most situations the M possible outcomes are mutually exclusive and exhaust the possibilities; then the probabilities must sum to unity,

\sum_{i=1}^{M} p_i = 1 \qquad (4)

and

\sum_{i=1}^{M} N_i = K \qquad (5)

so (3) reduces to

\chi^2 = \sum_{i=1}^{M} \frac{N_i^2}{p_i K} - K \qquad (6)
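
A quick numerical check (my own sketch, with made-up counts) confirms that the reduced form (6) reproduces (3) whenever (4) and (5) hold:

    # Compare Eq. (3) with its reduced form Eq. (6) for arbitrary counts.
    obs, probs = [12, 18, 20], [0.2, 0.3, 0.5]   # K = 50 events, M = 3 outcomes
    K = sum(obs)

    eq3 = sum((N - p * K) ** 2 / (p * K) for N, p in zip(obs, probs))
    eq6 = sum(N ** 2 / (p * K) for N, p in zip(obs, probs)) - K

    print(eq3, eq6)                              # both give 2.0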

When χ² = 0, the observed distribution coincides with the expected one; otherwise χ² > 0, and it is not bounded from above. Usually we want χ² small. It is an appropriate measure under these conditions [2]:

  1. The measured and expected distributions are discrete, not continuous.
  2. Each event is independent of all other events.
  3. The number of observed events is large; this is commonly interpreted to mean that the smallest number of observed events satisfies N_i > 5.

Example

We have two coins; one is a fair coin, so flipping it gives heads or tails with equal probability (p = 0.5). But the other coin is weighted so that flipping it gives heads with probability p = 0.8. We cannot visually distinguish between the two, but we need to identify the biased coin. To do so, we pick one coin, flip it fifty times, and record the number of heads and tails we observe. With these data we compute χ² twice: once assuming the coin is fair, a second time assuming it is biased. We use the smaller value of χ² to decide whether the coin is fair or biased.

For each coin, there are only two possible outcomes: either a head or a tail. So the sum in (3) has only two terms. For the fair coin with p = 0.5, (3) simplifies to

\chi^2 = \frac{2 (N_h - 25)^2}{25} \qquad (7)

where N_h is the number of heads we observe in fifty flips of the coin. For the biased coin with p = 0.8, (3) simplifies to

\chi^2 = \frac{(N_h - 40)^2}{8} \qquad (8)

Table 1 gives some possible results for fifty flips of one coin. The table indicates that if we flip one of the coins 50 times and get more than 33 heads, then that coin is probably the biased one; however, if we get 33 or fewer heads, then that coin is probably the fair one. We do not consider the possibilities N_h < 5 or N_h > 45 (hence fewer than 5 tails), because those situations would violate the third condition listed under (6).


Table 1.
Use of chi-squared to decide which of two coins is biased. We flip one coin 50 times and base our conclusion on the number of heads N_h obtained.

N_h   χ², Eq. (7)   χ², Eq. (8)   Conclusion
      (p = 0.5)     (p = 0.8)
 45      32            3.1        biased coin
 40      18            0          biased coin
 35       8            3.1        biased coin
 33       5.1          6.1        fair coin
 30       2           12.5        fair coin
 25       0           28          fair coin
 20       2           50          fair coin
 15       8           78          fair coin
 10      18          112          fair coin
  5      32          153          fair coin
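
The entries in Table 1 are easy to regenerate; the following sketch (mine, not the author's) applies (7) and (8) to each value of N_h and keeps the hypothesis with the smaller χ²:

    # Reproduce Table 1: chi-squared under both hypotheses for 50 flips.
    for Nh in (45, 40, 35, 33, 30, 25, 20, 15, 10, 5):
        chi2_fair = 2 * (Nh - 25) ** 2 / 25      # Eq. (7), p = 0.5
        chi2_biased = (Nh - 40) ** 2 / 8         # Eq. (8), p = 0.8
        verdict = "biased coin" if chi2_biased < chi2_fair else "fair coin"
        print(f"{Nh:2d}  {chi2_fair:6.1f}  {chi2_biased:6.1f}  {verdict}")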


If we generalize the above analysis so that K is any number of flips of one coin, then we find that whenever

N_h > \frac{2K}{3} \qquad (9)

then the coin is probably biased. (We caution that (9) applies only when one coin is fair and the other has an 80% probability of showing heads.)
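
The crossover in (9) is easy to verify numerically; this sketch (again my own) evaluates (3) for both hypotheses at arbitrary K and finds the smallest N_h at which the biased hypothesis wins:

    # Verify Eq. (9): the smallest Nh favoring the biased hypothesis
    # sits just above 2K/3, for any number of flips K.
    def crossover(K):
        for Nh in range(K + 1):
            chi2_fair = (Nh - 0.5 * K) ** 2 / (0.25 * K)    # Eq. (3), p = 0.5
            chi2_biased = (Nh - 0.8 * K) ** 2 / (0.16 * K)  # Eq. (3), p = 0.8
            if chi2_biased < chi2_fair:
                return Nh

    for K in (50, 100, 300):
        print(K, crossover(K), 2 * K / 3)        # crossover tracks 2K/3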

How Confident Can We Be About Our Conclusion?

The results in Table 1 do not guarantee that we would correctly identify the coin; all we can say is that our conclusion is more likely than the alternative, and the larger the difference between the two chi-squared values, the more likely our conclusion is correct. However, we can quantify how much "more likely" our conclusion is relative to the alternative, because the distribution of chi-squared itself is known when the number of events sampled is very large.

The χ² distribution depends on the number of "degrees of freedom" ν available in the problem. The value of ν is the number of possible outcomes M minus the number of constraints imposed; often the only constraint is that the probabilities for all outcomes must sum to unity, as in (4). Then [2]

\nu = M - 1 \qquad (10)

But in some situations additional constraints apply; for example, we may know the value for the mean of the sampled distribution. In any case, ν decreases by unity for each constraint.

It is conventional to characterize the χ² distribution in terms of its percentiles for particular numbers of degrees of freedom. A sample is given in Table 2. The body of the table contains values of χ² for particular values of ν and particular percentiles p. A percentile p represents the fractional area under the distribution up to a certain value of χ², as in Figure 1. For example, with ν = 1 the 20th percentile (p = 0.2) falls at χ² = 0.064; that is, in 20 experiments out of 100 we expect the observed value of χ² to be less than 0.064, and in the other 80 it should be greater.


Table 2. Values of chi-squared at selected percentiles (p) and selected numbers of degrees of freedom (ν). Extracted from a larger table in [3].

p      ν = 1   ν = 2   ν = 3   ν = 4   ν = 6   ν = 8   ν = 10
0.01   0.000   0.020   0.115   0.297   0.872   1.65    2.56
0.05   0.004   0.103   0.352   0.711   1.64    2.73    3.94
0.1    0.016   0.211   0.584   1.06    2.20    3.49    4.87
0.2    0.064   0.446   1.00    1.65    3.07    4.59    6.18
0.3    0.148   0.713   1.42    2.20    3.83    5.53    7.27
0.5    0.455   1.39    2.37    3.36    5.35    7.34    9.34
0.7    1.07    2.41    3.66    4.88    7.23    9.52    11.8
0.8    1.64    3.22    4.64    5.99    8.56    11.0    13.4
0.9    2.71    4.61    6.25    7.78    10.6    13.4    16.0
0.95   3.84    5.99    7.81    9.49    12.6    15.5    18.3
0.99   6.63    9.21    11.3    13.3    16.8    20.1    23.2
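
The percentiles in Table 2 were extracted from printed tables in [3], but they can be regenerated from the chi-squared quantile function; a sketch, assuming scipy is available:

    # Rebuild a few rows of Table 2 from the chi-squared quantile function.
    from scipy.stats import chi2

    for p in (0.01, 0.05, 0.1, 0.2, 0.5, 0.9, 0.95, 0.99):
        row = [chi2.ppf(p, df) for df in (1, 2, 3, 4, 6, 8, 10)]
        print(p, "  ".join(f"{x:6.3f}" for x in row))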



Figure 1. Sample chi-squared distribution for a particular number of degrees of freedom. At a given value of χ², the pth percentile represents the area under the distribution (shaded) up to that value, when the total area is normalized to unity.

Let us apply the χ² distribution in Table 2 to our problem of identifying which of two coins is biased. Recall we pick one coin at random and flip it 50 times. Say we obtain heads 35 times. Then from Table 1 we conclude that this coin is the biased one. Now we ask, how confident should we be that this conclusion is correct?

In this situation there are only two possible outcomes to flipping the coin, and only one constraint (4), so (10) gives the number of degrees of freedom as ν = 1. If the flipped coin were the fair one, then for N_h = 35 Table 1 gives χ² = 8. From the χ² distribution in Table 2 with ν = 1, we see that the 99th percentile has χ² = 6.63. That is, the probability is only 1% that we would observe χ² > 6.63 (whereas we actually obtained χ² = 8).

But if the tested coin were the biased one, then Table 1 gives χ² = 3.1, and from Table 2 with ν = 1, we see that the 90th percentile has χ² = 2.71. That is, the probability is 10% that we would find χ² > 2.71 (whereas we actually obtained χ² = 3.1). Comparing 10% with 1% gives us some confidence that if a coin shows heads 35 times out of 50, then that coin is the biased one.
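
The two tail probabilities quoted above follow directly from the chi-squared survival function; a sketch, again assuming scipy, with ν = 1:

    # Tail probabilities P(chi-squared > x) for nu = 1 degree of freedom.
    from scipy.stats import chi2

    print(chi2.sf(8.0, df=1))   # ~0.005: fair-coin hypothesis, Nh = 35
    print(chi2.sf(3.1, df=1))   # ~0.078: biased-coin hypothesis, Nh = 35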

We caution that when the number of sampled events K is not very large, the chi-squared distribution in Table 2 is only a rough approximation. In fact, Knuth [2] shows that often the values in Table 2 are reliable to only one significant figure.

References

[1] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, Numerical Recipes, Cambridge University Press, New York, 1986, p. 470f.

[2] D. E. Knuth, The Art of Computer Programming, vol. 2, "Seminumerical Algorithms", 2nd ed., Addison-Wesley, Reading, MA, 1981, p. 39f.

[3] B. W. Lindgren and G. W. McElrath, Introduction to Probability and Statistics, Macmillan, New York, 1959, p. 256.