Boundless Statistics

Chapter 13

Other Hypothesis Tests

By Boundless

Section 1
The t-Test
The t-Test

A t-test is any statistical hypothesis test in which the test statistic follows a Student's t-distribution if the null hypothesis is supported.

The t-Distribution

Student's $t$-distribution arises when estimating the mean of a normally distributed population whose standard deviation is unknown, especially when the sample size is small.
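Concretely, if $X_1, \ldots, X_n$ are independent draws from a normal population with mean $\mu$, the studentized sample mean $\frac{\bar{X} - \mu}{S / \sqrt{n}}$, where $S$ is the sample standard deviation, follows a Student's $t$-distribution with $n - 1$ degrees of freedom.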

Assumptions

Assumptions of a $t$-test depend on the population being studied and on how the data are sampled.

t-Test for One Sample

The $t$-test is the most powerful parametric test for assessing the significance of a small sample mean.

t-Test for Two Samples: Independent and Overlapping

Two-sample t-tests for a difference in mean involve independent samples, paired samples, and overlapping samples.

t-Test for Two Samples: Paired

Paired-samples $t$-tests typically consist of a sample of matched pairs of similar units, or one group of units that has been tested twice.

Calculations for the t-Test: One Sample

The following is a discussion of explicit expressions that can be used to carry out a one-sample $t$-test.
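For a single sample of size $n$ with mean $\bar{x}$ and standard deviation $s$, the test statistic against a hypothesized mean $\mu_0$ is $t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}}$, referred to a $t$-distribution with $n - 1$ degrees of freedom.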

Calculations for the t-Test: Two Samples

The following is a discussion of explicit expressions that can be used to carry out two-sample $t$-tests.
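For two independent samples with equal variances assumed, the statistic is $t = \frac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{1/n_1 + 1/n_2}}$, where $s_p^2 = \frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}$ is the pooled variance, with $n_1 + n_2 - 2$ degrees of freedom.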

Multivariate Testing

Hotelling's $T^2$ statistic allows for the testing of hypotheses on multiple (often correlated) measures within the same sample.
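In the one-sample case with $p$ measures, the statistic is $T^2 = n(\bar{\mathbf{x}} - \boldsymbol{\mu}_0)^{\mathsf{T}} \mathbf{S}^{-1} (\bar{\mathbf{x}} - \boldsymbol{\mu}_0)$, where $\mathbf{S}$ is the sample covariance matrix; under the null hypothesis, $\frac{n - p}{p(n - 1)} T^2$ follows an $F_{p,\, n-p}$ distribution.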

Alternatives to the t-Test

When the normality assumption does not hold, a nonparametric alternative to the $t$-test can often have better statistical power.

Cohen's d

Cohen's $d$ is a method of estimating effect size in a $t$-test based on means or distances between/among means.
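For two groups, Cohen's $d$ takes the form $d = \frac{\bar{x}_1 - \bar{x}_2}{s}$, where $s$ is typically the pooled standard deviation of the two groups.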

Section 2
The Chi-Squared Test
Categorical Data and the Multinomial Experiment

A multinomial experiment produces categorical count data; the associated test assesses the null hypothesis that the parameters of the multinomial distribution equal specified values.

Structure of the Chi-Squared Test

The chi-squared test is used to determine whether a distribution of observed frequencies differs from the theoretical expected frequencies.
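The test statistic is $\chi^2 = \sum_{i} \frac{(O_i - E_i)^2}{E_i}$, where $O_i$ and $E_i$ are the observed and expected frequencies in category $i$; it is referred to a chi-squared distribution with the appropriate degrees of freedom (e.g., $k - 1$ for a goodness-of-fit test over $k$ categories).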

How Fisher Used the Chi-Squared Test

Fisher's exact test is preferable to a chi-squared test when sample sizes are small or the data are very unequally distributed.

Goodness of Fit

The goodness-of-fit test determines whether the data "fit" a particular distribution.

Inferences of Correlation and Regression

The chi-squared test of association allows us to evaluate associations (or correlations) between categorical variables.

Example: Test for Goodness of Fit

The chi-squared test for goodness of fit compares the expected and observed values to determine how well an experimenter's predictions fit the data.

Example: Test for Independence

The chi-squared test for independence is used to determine whether two categorical variables in a sample are related.
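For an $r \times c$ contingency table, the expected count in cell $(i, j)$ is $E_{ij} = \frac{(\text{row } i \text{ total})(\text{column } j \text{ total})}{n}$, and $\chi^2 = \sum_{i,j} \frac{(O_{ij} - E_{ij})^2}{E_{ij}}$ is compared to a chi-squared distribution with $(r - 1)(c - 1)$ degrees of freedom.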

Section 3
Tests for Ranked Data
When to Use These Tests

"Ranking" refers to the data transformation in which numerical or ordinal values are replaced by their rank when the data are sorted.

Mann-Whitney U-Test

The Mann–Whitney $U$-test is a non-parametric test of the null hypothesis that two populations are the same, against the alternative that one population tends to yield larger values than the other.
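The statistic can be computed as $U_1 = R_1 - \frac{n_1(n_1 + 1)}{2}$, where $R_1$ is the sum of the ranks assigned to sample 1 in the combined ranking; since $U_1 + U_2 = n_1 n_2$, the smaller of $U_1$ and $U_2$ is conventionally reported.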

Wilcoxon t-Test

The Wilcoxon signed-rank test (sometimes called the Wilcoxon $t$-test) assesses whether population mean ranks differ for two related samples, matched samples, or repeated measurements on a single sample.
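One common form of the statistic is $W = \sum_{i=1}^{N_r} \operatorname{sgn}(x_{2,i} - x_{1,i}) R_i$, where the nonzero paired differences are ranked by absolute value ($R_i$) and $N_r$ is the number of nonzero differences.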

Kruskal-Wallis H-Test

The Kruskal–Wallis one-way analysis of variance by ranks is a non-parametric method for testing whether samples originate from the same distribution.
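With $g$ groups, $n_i$ observations in group $i$, rank sums $R_i$, and $N$ observations in total, the statistic (ignoring ties) is $H = \frac{12}{N(N + 1)} \sum_{i=1}^{g} \frac{R_i^2}{n_i} - 3(N + 1)$, referred to a chi-squared distribution with $g - 1$ degrees of freedom.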

Section 4
Nonparametric Statistics
Distribution-Free Tests

Distribution-free tests are hypothesis tests that make no assumptions about the probability distributions of the variables being assessed.

Sign Test

The sign test can be used to test the hypothesis that there is "no difference in medians" between the continuous distributions of two random variables.
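Under the null hypothesis, each nonzero paired difference is equally likely to be positive or negative, so the number of positive differences follows a binomial distribution with parameters $n$ (the number of nonzero differences) and $p = \tfrac{1}{2}$.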

Single-Population Inferences

Two notable nonparametric methods of making inferences about single populations are bootstrapping and the Anderson–Darling test.

Comparing Two Populations: Independent Samples

Nonparametric independent-samples tests include Spearman's rank correlation coefficient, Kendall's tau rank correlation coefficient, the Kruskal–Wallis ANOVA, and the runs test.

Comparing Two Populations: Paired Difference Experiment

McNemar's test is applied to $2 \times 2$ contingency tables with matched pairs of subjects to determine whether the row and column marginal frequencies are equal.
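Writing $b$ and $c$ for the counts in the two discordant cells, the test statistic is $\chi^2 = \frac{(b - c)^2}{b + c}$, referred to a chi-squared distribution with one degree of freedom.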

Comparing Three or More Populations: Randomized Block Design

Nonparametric methods using randomized block design include Cochran's $Q$ test and Friedman's test.

Rank Correlation

A rank correlation is any of several statistics that measure the relationship between rankings.
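For example, when there are no tied ranks, Spearman's rank correlation coefficient is $\rho = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)}$, where $d_i$ is the difference between the two ranks of observation $i$.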

