

*Extra-Sensory Perception*, by J. B. Rhine, [1934], at sacred-texts.com

From the beginning of the scientific period of parapsychology, the subject has had the aid of mathematical methods in its technique of evaluation. Professor Richet first introduced the mathematics of probability into this field in his treatment of the results of his earlier work on "suggestion mentale" or "telepathy", in 1884. 2 And since then the names of Edgeworth and R. A. Fisher of England and of Hawkesworth in America have appeared frequently in connection with probability estimation in the parapsychic branch of the field.

I am no mathematician and must rely upon methods already developed, when they can be found. But in this work it is fortunately possible to make experimental method conform to easy computation of significance of results, and this I have done. I have been able, by adhering to the use of five simple card-figures, to keep the probability of success by pure chance at 1/5 for each trial. Where a straight run of consecutive successes is to be evaluated for anti-chance probability, simply raising 1/5 to the power equal to the number of consecutive successes gives the value desired. This is established probability mathematics.
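The rule for a straight run can be sketched in a few lines; Python is used here purely for illustration, and is of course no part of the original text:

```python
# Chance probability of k consecutive successes when each
# trial has probability 1/5 of succeeding by chance alone.
def run_probability(k, p=1/5):
    return p ** k

# Five straight hits: (1/5)**5, i.e. one chance in 3125.
print(run_probability(5))
```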

When, however, scattered successes are to be evaluated for anti-chance significance, the first step is to find the normal chance expectation. This is simply the number of trials (n) multiplied by the probability for success per trial (p), or np. With 1000 trials on 5-suit cards this would be 200. If more or fewer successes are obtained, the difference or deviation is found by subtraction. If 300 successes are given, there is then a positive deviation from np (or chance expectation) of 100. This can be evaluated in terms of percentages, if one merely desires to compare scoring-rates. It may be expressed as percentage of the number of trials (n), or of chance expectation (np) or, of course, as fractions of these. Here we would have a positive deviation that is 10% of n or 50% of np.
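The quantities of this paragraph (the chance expectation np, the deviation, and the two percentage rates) can be sketched thus; again Python is my choice for illustration only:

```python
# Chance expectation, deviation, and percentage rates for
# n trials with probability p of success per trial.
def evaluate(n, p, successes):
    expectation = n * p               # np, the chance expectation
    deviation = successes - expectation
    pct_of_n = 100 * deviation / n
    pct_of_np = 100 * deviation / expectation
    return expectation, deviation, pct_of_n, pct_of_np

# The example in the text: 1000 trials at p = 1/5 with 300 hits,
# giving expectation 200, deviation +100, 10% of n, 50% of np.
print(evaluate(1000, 1/5, 300))
```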

But in order to get a more general evaluation—*i.e.*, one that measures the rate of scoring in conjunction with the number of trials at which such a rate holds—it is necessary to measure the deviation in relation to a standardized unit of probable deviation. The arbitrary unit I shall use here is the Probable Error (p.e.) which, in this situation, is that deviation from the mean (chance) expectation at which the odds are even (1:1) as to whether pure chance alone is operating or not. The deviation is then divided by the p.e., and the value D/p.e., or critical ratio, is found. This is a more nearly absolute estimate of the anti-chance value of a given deviation than percentage figures are. Taking the data from Table LII of Gavett's *Statistical Method*, 1 I shall cite the odds against chance for the smaller values of D/p.e. (deviation divided by the probable error):

| D/p.e. | Odds against a chance-theory |
|--------|------------------------------|
| 1      | 1 to 1                       |
| 2      | 4.6 to 1                     |
| 3      | 22 to 1                      |
| 4      | 142 to 1                     |
| 5      | 1,300 to 1                   |
| 6      | 20,000 to 1                  |

And adding higher figures adapted from R. A. Fisher's Table II, *Statistical Methods for Research Workers*, 2 I get approximately:

| D/p.e. | Odds against a chance-theory |
|--------|------------------------------|
| 7      | 100,000 to 1                 |
| 8      | nearly 1,000,000 to 1        |
| 9      | over 100,000,000 to 1        |

Note the rapid rise of these figures for each unit of D/p.e. It is customary to accept a value of 4 as a significant D/p.e. This implies odds of 142 to 1 against a mere chance explanation.

In this report I shall use X to indicate D/p.e., which will be given for nearly every set of data reported (and all are reported). X, then, for any particular lot of data, is its "anti-chance index".
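A sketch of the anti-chance index in Python follows. The probable error of a binomial count, p.e. = 0.6745 √(npq), is my assumption of the standard convention (0.6745 being the usual probable-error multiple of the standard deviation); the text itself does not give the formula:

```python
import math

def anti_chance_index(n, p, successes):
    """X = D / p.e., taking p.e. = 0.6745 * sqrt(n*p*q),
    the conventional probable error of a binomial count."""
    q = 1 - p
    pe = 0.6745 * math.sqrt(n * p * q)
    deviation = successes - n * p
    return deviation / pe

# The earlier example: 1000 trials at p = 1/5 with 300 hits
# gives X of about 11.7, far beyond the customary 4.
print(anti_chance_index(1000, 1/5, 300))
```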

Now these values of X for particular groups of results have a progressive effect upon the mind. That is, if there are three groups, each with an X-value of 6, we can agree that these are more impressive than one group alone with an X-value of 6. How much more? And how is this to be determined? I have searched in vain for authority on this point, and have finally attempted a solution which I submit here and use in this report. It is tentatively offered and may later be rejected for a better method, if such is pointed out to me. I have made certain that this method errs, if it errs at all, on the safe side. And it is not at all necessary to any major issue of this report to use it. The reasons for using it are these.

First, there is needed an easy way of summating the "anti-chance" significance of many groups of results, instead of pooling them all together and recomputing the value of X after each addition through the report.

Second, and more important, in such pooling together the results made by the high scorers are merged with perhaps a greater number made by the poor scorers, and so, under the general assumption of equal distribution over the whole lot, the greater contribution of the former is lost. A short series of 1000 trials by a good subject may well reach a higher figure for X than a poor scorer (only a little above mean expectation) reaches over a series of 10,000 trials. For some purposes it is proper to pool these; for others it is proper to summate their joint effect against the chance-hypothesis by another method, one which gives proper weight to the scoring rate of each group.

Third, I have in some cases to deal with negative deviations, obtained under conditions in which I tried to secure low results and succeeded. These, too, have their statistical significance and add, quite as well as the positive deviations, to the general weight of the conclusions. But if they were pooled with the totals, they would of course only detract from the total value. (Even this, however, would not at all destroy any of our conclusions, because of the large margin of safety.)

One may see the propriety of combining these values of X by remembering that each such value has a corresponding value (see Normal Probability Tables) representing the probability that the deviation it represents was due to chance alone; for example, for X = 3, this is 1/22; for X = 4, 1/142; for X = 6, 1/20,000. Now, three such values of X (for results given under conditions that permit generalization) can be combined by multiplying the three probability fractions, and thus the total odds against chance be computed. (This is simple for low values, but the needs of this report take in large values of X as well as small; and I have not found tables for the probabilities for large values of X.) Now, with the smaller probability fraction thus arrived at, one may obtain an equivalent value of X from the normal probability tables, if they extend that far. Working thus within the range of the tables available, it was found that the product of the probability fractions for a series of X-values came out roughly equal to the probability fraction for the square root of the sum of the squares of the X-values concerned. In each case, however, there was a lower X-value obtained by the formula

X_n = √(X²_A + X²_B + X²_C)

than by the multiplication of the probability fractions. This is safe at least, if not exact.
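The comparison between the product of the probability fractions and the fraction for the combined X can be sketched with the tail of the normal curve (Python's math.erfc); converting p.e. units into standard deviations by the 0.6745 factor is my assumption of the usual convention, not a detail given in the text:

```python
import math

def tail_prob(x):
    """Two-sided chance probability of a deviation of x or more
    probable errors, from the normal curve (1 p.e. = 0.6745 sigma)."""
    return math.erfc(0.6745 * x / math.sqrt(2))

def combine(xs):
    """The combination rule: X_n = square root of the sum of squares."""
    return math.sqrt(sum(x * x for x in xs))

# Two groups with X = 3 and X = 4.  The probability fraction for
# the combined X (here 5) exceeds the product of the separate
# fractions, so the formula understates the case against chance,
# and is in that sense "safe".
product = tail_prob(3.0) * tail_prob(4.0)
combined = tail_prob(combine([3.0, 4.0]))
print(product, combined, combined >= product)
```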

I then reasoned in the following way for the deduction of a verifying procedure (for justifying the formula): each X is an independent value; it may represent a large number of trials with small deviation-rate or a shorter series with a higher rate, and vice versa. If we can find a way of checking the formula for combined values of X, it must hold for X's derived from large and small deviation rates, or large and small numbers of trials. That is, like the probability fraction which it represents, it is independently manipulable.

Now it appeared possible to check this formula's reliability in the following way: assuming equally distributed deviation-rates over a large number of trials, determine the X for the group as a whole (X_n); then divide the group into various subdivisions, large and small, and for each calculate the X-values; apply the formula to these to find X_n by this method in order to test it. I did so and found that it worked closely, yielding an X_n equal within a unit to that computed the other way, from the group as a whole. 1 If larger X-values can thus be calculated from smaller in this case (as was demonstrated) and if X-values are independently usable values (as they logically have to be), the method must stand as checked, to the extent of accuracy claimed, which is all that is needed for this work.
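The verifying procedure just described can be sketched numerically: compute X for a whole group, then again by the formula from equal subdivisions of it. The probable-error formula for a binomial count (0.6745 √(npq)) and the particular figures (10,000 trials at p = 1/5 with an evenly spread surplus) are illustrative assumptions of mine, not Rhine's own numbers:

```python
import math

def x_value(n, p, successes):
    """Anti-chance index X = deviation / probable error,
    with p.e. = 0.6745 * sqrt(n*p*q)."""
    pe = 0.6745 * math.sqrt(n * p * (1 - p))
    return (successes - n * p) / pe

def combine(xs):
    """X_n = square root of the sum of the squared X-values."""
    return math.sqrt(sum(x * x for x in xs))

# Whole group: 10,000 trials, 2,500 hits (deviation +500).
whole = x_value(10_000, 1/5, 2_500)

# The same data as four equal subdivisions of 2,500 trials each,
# the surplus spread evenly (625 hits apiece).
parts = [x_value(2_500, 1/5, 625) for _ in range(4)]

# With an even distribution the two routes agree closely.
print(whole, combine(parts))
```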

The formula has, therefore, been used in this report and is in any case safe from exaggerative effect on the general results. And it will, I hope, serve at the same time to raise the problem for those readers who may be on better terms with the "Queen of the Sciences".

31:2 Richet, Charles, *La Suggestion Mentale et le calcul des Probabilités*. Rev. Phil., 1884. For a full review in English see Gurney, *Proc*. S.P.R., II: pp. 239-256, 1884.

32:1 McGraw-Hill, New York, 1925, p. 180.

32:2 3rd Ed., Oliver and Boyd, London, 1930.

34:1 There is a similar practical check of the formula in Table XLIII, in the final chapter, in which the X-value is given for the results reported in the various chapters. That value for the results reported in Chapter 8 is almost the same for both ways of computing the X-value (81.9 for the formula and 82.1 for the computation based on the pooling together of all the results). Now, here the evenness of distribution of scoring-rates for the five major subjects makes the pooling together do no violence to the resulting values. They would not have checked had the individual differences been great. Then the formula would have given the more correct value, as it does for the other chapters represented in Table XLIII.