## Data interpretation: using probability

Gordon B. Drummond, Sarah L. Vowler

*Advances in Physiology Education*, 2011
Key points:

- Ensure that a sample is random
- Use observations of a sample to judge the features of the population
- Plan the study: this includes the appropriate analysis
- Establish a hypothesis: usually that there is no difference
- Estimate the probability that the observed data could have occurred by chance
- Consider the probabilities of more extreme data as well
- If you find "no difference", this is no DETECTABLE difference
- Absence of evidence is NOT evidence of absence

EXPERIMENTAL DATA are analysed statistically to allow us to draw conclusions from a limited set of measurements. The hard fact is that we can never be certain that measurements from a sample will exactly reflect the properties of the entire group of possible candidates available to be studied (although using a sample is often the only practical thing to do). It is possible that some scientists are not even clear that the word "sample" has a special meaning in statistics, or do not appreciate the importance of taking an unbiased sample. Some may consider a "sample" to be something like the first ten leeches that come out of a jar!

If we have taken care to obtain a truly random or representative sample from a large number of possible individuals, we can use this unbiased sample to judge the possibility that our observations support a particular hypothesis. Statistical analysis allows the strength of this possibility to be estimated. Since it is not completely certain, the converse of this likelihood shows the uncertainty that remains. Scientists are better at dealing with uncertainty than the popular press, but many are still swayed by "magical" cut-off values for P values, such as 0.05, below which hypotheses are considered (supposedly) proven, forgetting that probability is measured on a continuum and is not dichotomous. Words can betray, and often cannot provide sufficient nuance to describe effects that can be indistinct or fuzzy (3). Indeed, many of the words we use, such as significance, likelihood, and probability, and conclusions such as "no effect", should be used guardedly to avoid mistakes. There are also differences of opinion between statisticians: some are more theoretical and others more pragmatic, and some of the different approaches used for statistical inference are hard for the novice to grasp.
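The difference between a convenience sample (the first ten leeches out of the jar) and an unbiased one can be made concrete in code. This is a sketch, not from the article; the jar of 200 numbered leeches is a hypothetical population invented for illustration.

```python
import random

# Hypothetical population: a jar of 200 numbered leeches.
population = list(range(1, 201))

# Convenience sample: the first ten to come out of the jar. This is
# biased if eager leeches differ systematically from reluctant ones.
convenience = population[:10]

# Unbiased sample: every leech has the same chance of being chosen.
random.seed(42)  # fixed seed only so the sketch is reproducible
unbiased = random.sample(population, 10)
print(unbiased)
```

Only the second sample justifies using probability theory to generalise from the sample to the population, because only there is the selection mechanism known.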
Although a full mathematical understanding is not necessary for most researchers, it is vital to understand the basic principles behind the statistical approaches adopted. This avoids treating statistical tests as if they were a fire appliance, to be picked up when smoking data need to be dealt with, while vaguely hoping you have got the correct type. It is better to know how the data should be properly analysed, just as it is better to know which extinguisher works best: the wrong statistical approach can be like using water on an electrical fire. Ideally, the appropriate method of analysis should be anticipated, because it should have been considered when the study was set up. A properly designed study that aims to answer specific questions will have defined the outcomes of interest at the outset, before data collection has started. These questions are then recast as hypotheses to be tested. We use the collected measurements on an appropriate outcome to test how probable the observations would have been if a particular hypothesis of interest were correct. Assuming that the reader has followed our previous instructions to display the data properly (2), we hope that a review of these data displays will confirm the planned analysis or suggest alternatives.

As an example, and with no apology for a basic approach, we shall explain the principles of statistical inference with a simple example involving probability, the bedrock of statistical analysis. We hope that the example will be sufficiently concrete to give insight into concepts such as significance, effect size, and power; more specific and practical aspects will be addressed later in the series. Suppose we set up a very simple experiment to find out whether a flu virus is more lethal in one strain of cell than another. We have 20 A cells and 20 B cells to study, and we assume that the cells chosen are representative of A and B cells in general.
We infect these 40 cells with the virus and find that 8 of the 40 die (Figure 1). We start the analysis with a hypothesis: that the probability of death after infection is equal for each strain. This is a hypothesis of independence (i.e., no association) between death and strain. It also corresponds to the null hypothesis that there is no difference in the capacity of the virus to kill A and B cells. This hypothesis lets us calculate the probability of observing a number of potential results from our study, and also predict what would be found if the virus were equally lethal in A cells and B cells. For instance, given a total of 8 deaths, we could predict under this null hypothesis that the likely splits would be 4 dead A cells and 4 dead B cells, or perhaps 5 and 3. However, we discover that 6 of the dead cells are strain B. Is this finding evidence that the mortality rates are different? Are A cells more resistant? Or is this just chance at work? If we wished to observe how chance works, we could toss a fair coin eight times to predict the strain of the dead cells.
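The coin-toss thought experiment can be sketched in code. Under the null hypothesis, each of the 8 dead cells is equally likely to be strain A or strain B, so the number of B deaths behaves like heads in 8 fair coin tosses. The exact tail probability below (including the "more extreme" outcomes of 7 and 8, as the key points advise) and the Monte Carlo simulation are our illustration, not the authors' calculation; the article's formal analysis comes later in the series.

```python
import math
import random

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    B-cell deaths among n deaths, counting the more extreme outcomes too."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Observed: 6 of the 8 dead cells were strain B.
p_one_sided = binom_tail(8, 6)   # exact value is 37/256, about 0.145

# Monte Carlo version of the coin-toss thought experiment: toss a fair
# coin 8 times per trial and count how often 6 or more come up "B".
random.seed(1)  # fixed seed only for reproducibility of the sketch
trials = 100_000
hits = sum(sum(random.random() < 0.5 for _ in range(8)) >= 6
           for _ in range(trials))
print(p_one_sided, hits / trials)
```

A split at least as lopsided as 6-versus-2 turns up in roughly one in seven repetitions of the experiment even when the virus is equally lethal in both strains, which is why the observed result alone is weak evidence against the null hypothesis.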

doi:10.1152/advan.00023.2011
pmid:21652497