
Single-Population Inferences

Two notable nonparametric methods of making inferences about single populations are bootstrapping and the Anderson–Darling test.

Learning Objective

  • Contrast bootstrapping and the Anderson–Darling test for making inferences about single populations


Key Points

    • Bootstrapping is a method for assigning measures of accuracy to sample estimates.
    • More specifically, bootstrapping is the practice of estimating properties of an estimator (such as its variance) by measuring those properties when sampling from an approximating distribution.
    • The bootstrap works by treating inference of the true probability distribution $J$, given the original data, as being analogous to inference of the empirical distribution $\hat{J}$, given the resampled data.
    • The Anderson–Darling test is a statistical test of whether a given sample of data is drawn from a given probability distribution.
    • In its basic form, the test assumes that there are no parameters to be estimated in the distribution being tested, in which case the test and its set of critical values are distribution-free.
    • $K$-sample Anderson–Darling tests are available for testing whether several collections of observations can be modeled as coming from a single population.

Terms

  • bootstrap

    any method or instance of estimating properties of an estimator (such as its variance) by measuring those properties when sampling from an approximating distribution

  • uniform distribution

    a family of symmetric probability distributions such that, for each member of the family, all intervals of the same length on the distribution's support are equally probable


Full Text

Two notable nonparametric methods of making inferences about single populations are bootstrapping and the Anderson–Darling test.

Bootstrapping

Bootstrapping is a method for assigning measures of accuracy to sample estimates. This technique allows estimation of the sampling distribution of almost any statistic using only very simple methods.

More specifically, bootstrapping is the practice of estimating properties of an estimator (such as its variance) by measuring those properties when sampling from an approximating distribution. One standard choice for an approximating distribution is the empirical distribution of the observed data. In the case where the observations can be assumed to be independent and identically distributed, this can be implemented by constructing a number of resamples of the observed dataset (each of equal size to the observed dataset), each of which is obtained by random sampling with replacement from the original dataset.
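
As a concrete illustration, here is a minimal Python sketch (assuming NumPy; the data, estimator, and resample count are hypothetical choices, not part of the original text) of estimating an estimator's variance by resampling with replacement from the empirical distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_variance(data, estimator, n_resamples=10_000):
    """Estimate the variance of `estimator` by resampling with
    replacement from the empirical distribution of `data`."""
    data = np.asarray(data)
    n = len(data)
    # Each resample has the same size as the observed dataset.
    estimates = np.array([
        estimator(rng.choice(data, size=n, replace=True))
        for _ in range(n_resamples)
    ])
    return estimates.var(ddof=1)

# Example: variance of the sample median, an estimator with no
# simple closed-form standard error. Simulated data for illustration.
sample = rng.normal(loc=170, scale=10, size=50)
print(bootstrap_variance(sample, np.median))
```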

Bootstrapping may also be used for constructing hypothesis tests. It is often used as an alternative to inference based on parametric assumptions when those assumptions are in doubt, or where parametric inference is impossible or requires very complicated formulas for the calculation of standard errors.

Approach

The bootstrap works by treating inference of the true probability distribution $J$, given the original data, as being analogous to inference of the empirical distribution $\hat{J}$, given the resampled data. The accuracy of inferences regarding $\hat{J}$ using the resampled data can be assessed because we know $\hat{J}$. If $\hat{J}$ is a reasonable approximation to $J$, then the quality of inference on $J$ can, in turn, be inferred.

As an example, assume we are interested in the average (or mean) height of people worldwide. We cannot measure all the people in the global population, so instead we sample only a tiny part of it, and measure that. Assume the sample is of size $N$; that is, we measure the heights of $N$ individuals. From that single sample, only one value of the mean can be obtained. In order to reason about the population, we need some sense of the variability of the mean that we have computed.

The simplest bootstrap method involves taking the original data set of $N$ heights and, using a computer, sampling from it to form a new sample (called a 'resample' or bootstrap sample) that is also of size $N$. Because the bootstrap sample is drawn from the original using sampling with replacement, it is not identical to the original "real" sample. This process is repeated a large number of times, and for each bootstrap sample we compute its mean. The histogram of these bootstrap means provides an estimate of the shape of the distribution of the sample mean, from which we can answer questions about how much the mean varies.
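
The following sketch carries out this procedure in Python (assuming NumPy; the heights are simulated stand-ins for a real sample of size $N$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample of N measured heights (cm).
heights = rng.normal(loc=170, scale=10, size=100)
N = len(heights)

# Draw many bootstrap samples of size N, with replacement,
# and record the mean of each.
boot_means = np.array([
    rng.choice(heights, size=N, replace=True).mean()
    for _ in range(10_000)
])

# The spread of the bootstrap means approximates the sampling
# variability of the mean computed from the original sample.
print("sample mean:", heights.mean())
print("bootstrap SE of the mean:", boot_means.std(ddof=1))
```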

Situations where bootstrapping is useful include:

  • When the theoretical distribution of a statistic of interest is complicated or unknown.
  • When the sample size is insufficient for straightforward statistical inference.
  • When power calculations have to be performed, and a small pilot sample is available.

A great advantage of the bootstrap is its simplicity. It is a straightforward way to derive estimates of standard errors and confidence intervals for complex estimators of complex parameters of the distribution, such as percentile points, proportions, odds ratios, and correlation coefficients. Moreover, it is an appropriate way to control and check the stability of the results.
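
For instance, a percentile bootstrap confidence interval for a correlation coefficient (an estimator without a simple closed-form standard error) can be sketched as follows; the paired data here are simulated and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical paired data with an underlying linear relationship.
x = rng.normal(size=80)
y = 0.6 * x + rng.normal(scale=0.8, size=80)
n = len(x)

boot_corrs = np.empty(10_000)
for i in range(boot_corrs.size):
    # Resample (x, y) pairs jointly, with replacement.
    idx = rng.integers(0, n, size=n)
    boot_corrs[i] = np.corrcoef(x[idx], y[idx])[0, 1]

# Percentile bootstrap 95% confidence interval.
lo, hi = np.percentile(boot_corrs, [2.5, 97.5])
print(f"r = {np.corrcoef(x, y)[0, 1]:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```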

However, although bootstrapping is (under some conditions) asymptotically consistent, it does not provide general finite-sample guarantees. The apparent simplicity may conceal the fact that important assumptions are being made when undertaking the bootstrap analysis (e.g. independence of samples) where these would be more formally stated in other approaches.

The Anderson–Darling Test

The Anderson–Darling test is a statistical test of whether a given sample of data is drawn from a given probability distribution. In its basic form, the test assumes that there are no parameters to be estimated in the distribution being tested, in which case the test and its set of critical values are distribution-free. $K$-sample Anderson–Darling tests are available for testing whether several collections of observations can be modeled as coming from a single population, where the distribution function does not have to be specified.

The Anderson–Darling test assesses whether a sample comes from a specified distribution. It makes use of the fact that, when given a hypothesized underlying distribution and assuming the data do arise from this distribution, the data can be transformed to a uniform distribution. The transformed sample data can then be tested for uniformity with a distance test. The formula for the test statistic $A^2$ to assess whether the ordered data $Y_1 < Y_2 < \cdots < Y_n$ come from a distribution with cumulative distribution function (CDF) $F$ is:

$A^2 = -n - S$

where

$\displaystyle{S= \sum_{k=1}^n \frac{2k-1}{n} \left[ \ln (F (Y_k) ) + \ln ( 1- F ( Y_{n+1-k})) \right]}$

The test statistic can then be compared against the critical values of the theoretical distribution. Note that in this case no parameters are estimated in relation to the distribution function $F$.
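
The formula translates directly into code. The sketch below (assuming NumPy and SciPy; the data are simulated) computes $A^2$ against a fully specified standard normal CDF, so no parameters are estimated:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def anderson_darling(sample, cdf):
    """Compute A^2 = -n - S for a fully specified CDF
    (no parameters estimated from the data)."""
    y = np.sort(sample)            # ordered data Y_1 < ... < Y_n
    n = len(y)
    k = np.arange(1, n + 1)
    # y[::-1][k-1] is Y_{n+1-k}, matching the formula for S.
    s = np.sum((2 * k - 1) / n
               * (np.log(cdf(y)) + np.log(1 - cdf(y[::-1]))))
    return -n - s

data = rng.normal(size=200)
print(anderson_darling(data, norm.cdf))
```

When parameters of $F$ must instead be estimated from the data, the critical values change; SciPy's scipy.stats.anderson handles that variant for several common distribution families.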
