Akaike information criterion

(noun)

a measure of the relative quality of a statistical model for a given set of data, balancing the model's goodness of fit against its complexity
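
For a model with $k$ estimated parameters and maximized likelihood value $\hat{L}$, the criterion is

$$\mathrm{AIC} = 2k - 2\ln\hat{L},$$

where lower values are better: the $2k$ term penalizes complexity, while $-2\ln\hat{L}$ rewards goodness of fit.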

Related Terms

  • Bayesian information criterion
  • Bonferroni point

Examples of Akaike information criterion in the following topics:

  • Stepwise Regression

    • Usually, this takes the form of a sequence of $F$-tests; however, other techniques are possible, such as $t$-tests, adjusted $R^2$, Akaike information criterion, Bayesian information criterion, Mallows's $C_p$, or false discovery rate.
    • Forward selection involves starting with no variables in the model, testing the addition of each variable using a chosen model comparison criterion, adding the variable (if any) that improves the model the most, and repeating this process until no remaining variable improves the model (a code sketch follows this list).
    • Backward elimination involves starting with all candidate variables, testing the deletion of each variable using a chosen model comparison criterion, deleting the variable (if any) whose removal improves the model the most, and repeating this process until no further improvement is possible.
    • This problem can be mitigated if the criterion for adding (or deleting) a variable is strict enough.
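
As a concrete illustration of the forward-selection variant, here is a minimal Python sketch that greedily adds whichever predictor most lowers the AIC; it assumes statsmodels is available, and the simulated data and variable names are illustrative rather than taken from the text.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))                              # four candidate predictors
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)   # only columns 0 and 2 matter

def fit_aic(cols):
    """Fit OLS on the chosen predictor columns and return the model's AIC."""
    design = sm.add_constant(X[:, cols]) if cols else np.ones((n, 1))
    return sm.OLS(y, design).fit().aic

selected, remaining = [], list(range(X.shape[1]))
best_aic = fit_aic(selected)                             # start from the intercept-only model
while remaining:
    scores = {j: fit_aic(selected + [j]) for j in remaining}
    j_best = min(scores, key=scores.get)                 # candidate with the lowest AIC
    if scores[j_best] >= best_aic:                       # no candidate improves the model
        break
    best_aic = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected predictors:", selected, "AIC:", round(best_aic, 2))
```

Swapping AIC for BIC or another comparison criterion only changes the `fit_aic` helper; the greedy loop itself is the same for any of the criteria listed above.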
  • Statistical Power

    • The Statistical Significance Criterion Used in the Test: A significance criterion is a statement of how unlikely a positive result must be, if the null hypothesis of no effect is true, for the null hypothesis to be rejected.
    • One easy way to increase the power of a test is to carry out a less conservative test by using a larger significance criterion, for example 0.10 instead of 0.05 (the sketch after this list illustrates the difference).
    • An unstandardized (direct) effect size will rarely be sufficient to determine the power, as it does not contain information about the variability in the measurements.
    • Let's say we use a significance criterion of 0.05.
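
To make the effect of the significance criterion concrete, here is a small Python sketch of the power of a one-sided, one-sample $z$-test; the standardized effect size of 0.3 and sample size of 50 are made-up values for illustration.

```python
from scipy.stats import norm

def z_test_power(effect_size, n, alpha):
    """Power = P(reject H0) when the true standardized effect is effect_size."""
    crit = norm.ppf(1 - alpha)              # rejection cutoff under H0
    shift = effect_size * n ** 0.5          # mean of the test statistic under H1
    return 1 - norm.cdf(crit - shift)

for alpha in (0.05, 0.10):
    print(f"alpha = {alpha}: power = {z_test_power(0.3, 50, alpha):.3f}")
```

With these made-up numbers, relaxing the criterion from 0.05 to 0.10 raises the power from about 0.68 to about 0.80, at the cost of doubling the false positive rate.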
  • Introduction to Multiple Regression

    • In simple linear regression, a criterion variable is predicted from one predictor variable.
    • In multiple regression, the criterion is predicted by two or more variables (a fitting sketch follows this list).
    • In multiple regression, it is often informative to partition the sums of squares explained among the predictor variables.
    • Specifically, the errors of prediction (residuals) are the differences between the actual scores on the criterion and the predicted scores.
    • It is assumed that the relationship between each predictor variable and the criterion variable is linear.
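
A minimal Python sketch of fitting such a model, assuming statsmodels; the two predictors, their weights, and the criterion are simulated for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.8 * x1 + 0.5 * x2 + rng.normal(scale=0.5, size=n)  # criterion variable

# Fit the criterion from two predictors plus an intercept.
model = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(model.params)     # intercept and the two regression weights
print(model.resid[:5])  # residuals: actual minus predicted criterion scores
```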
  • Defining conditional probability

    • It is useful to think of the condition as information we know to be true, and this information usually can be described as a known outcome or event.
    • Suppose we were provided only the information in Table 2.13 on the preceding page, i.e. only probability data.
    • Then if we took a sample of 1000 people, we would anticipate about 47% or 0.47 × 1000 = 470 would meet our information criterion.
    • Similarly, we would expect about 28% or 0.28 × 1000 = 280 to meet both the information criterion and represent our outcome of interest (the two counts combine as shown after this list).
    • The complement still appears to work when conditioning on the same information.
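
Combining the two counts above gives the conditional probability of the outcome given the information criterion:

$$P(\text{outcome} \mid \text{criterion}) = \frac{P(\text{outcome and criterion})}{P(\text{criterion})} = \frac{0.28}{0.47} = \frac{280}{470} \approx 0.60.$$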
  • Glossary

    • In general, the criterion variable is the dependent variable.
    • The degrees of freedom of an estimate is the number of independent pieces of information that go into the estimate (see the worked example after this list).
    • Two variables are said to be independent if the value of one variable provides no information about the value of the other variable.
    • In multiple regression, the criterion is predicted from two or more predictor variables.
    • In the following example, the criterion ($Y$) is predicted by $X_1$, $X_2$, and $X_3$.
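
As a worked example of degrees of freedom: the $n$ deviations from a sample mean supply only $n - 1$ independent pieces of information, because they are constrained to sum to zero, which is why the sample variance divides by $n - 1$:

$$\sum_{i=1}^{n} (X_i - \bar{X}) = 0 \qquad\Longrightarrow\qquad s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2.$$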
  • Remarks on the Concept of “Probability”

    • This is a natural idea but nonetheless unreasonable if we have further information relevant to whether it will rain tomorrow.
    • Two people might attach different probabilities to the election outcome, yet there would be no criterion for calling one "right" and the other "wrong."
  • Inferential Statistics for b and r

    • The column X has the values of the predictor variable and the column Y has the values of the criterion variable.
    • We now have all the information to compute the standard error of b:
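
For simple linear regression with $n$ pairs of scores, that standard error is conventionally written as

$$s_b = \frac{s_{\text{est}}}{\sqrt{\sum (X - \bar{X})^2}}, \qquad s_{\text{est}} = \sqrt{\frac{\sum (Y - \hat{Y})^2}{n - 2}},$$

where $s_{\text{est}}$ is the standard error of the estimate, $\hat{Y}$ is the predicted criterion score, and the sum in the denominator runs over the values of the predictor column X.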
  • Measurement

    • Finally, if a test is being used to select students for college admission or employees for jobs, then the higher the reliability of the test, the stronger its relationship to the criterion will be.
    • Items that are either too easy (almost everyone gets them correct) or too difficult (almost no one gets them correct) are not good items: they provide very little information.
  • Hypothesis testing exercises

    • (e) If your hypothesis test and confidence interval suggest a significant increase in revenue per customer, why might you still not recommend that the restaurant owner extend the happy hour based on this criterion?
    • No information is provided about the skew.
    • (d) There is not enough information to tell.
    • No information is provided about the skew, so this is another item we would typically ask about.

Except where noted, content and user contributions on this site are licensed under CC BY-SA 4.0 with attribution required.