Bayesian information criterion

(noun)

a criterion for model selection among a finite set of models that is based, in part, on the likelihood function
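
In its most common form, the criterion is computed from the maximized value of the likelihood function $\hat{L}$, the number of estimated parameters $k$, and the sample size $n$:

$\mathrm{BIC} = k\ln(n) - 2\ln(\hat{L})$

Among candidate models, the one with the lowest BIC is preferred. Because the $k\ln(n)$ penalty grows with the sample size, BIC penalizes extra parameters more heavily than the $2k$ penalty of the Akaike information criterion whenever $n \geq 8$.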

Related Terms

  • Akaike information criterion
  • Bonferroni point

Examples of Bayesian information criterion in the following topics:

  • Stepwise Regression

    • Usually, this takes the form of a sequence of $F$-tests; however, other techniques are possible, such as $t$-tests, adjusted $R^2$, Akaike information criterion, Bayesian information criterion, Mallows's $C_p$, or false discovery rate.
    • Forward selection involves starting with no variables in the model, testing the addition of each variable using a chosen model comparison criterion, adding the variable (if any) that improves the model the most, and repeating this process until no remaining variable improves the model.
    • Backward elimination involves starting with all candidate variables, testing the deletion of each variable using a chosen model comparison criterion, deleting the variable (if any) whose removal improves the model the most, and repeating this process until no further improvement is possible.
    • This problem can be mitigated if the criterion for adding (or deleting) a variable is strict enough; a minimal forward-selection sketch using BIC follows below.
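
A minimal sketch of forward selection with BIC as the comparison criterion. The data, variable names, and stopping rule here are hypothetical; the sketch assumes the numpy, pandas, and statsmodels libraries.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame(rng.normal(size=(n, 4)), columns=["x1", "x2", "x3", "x4"])
y = 2.0 * X["x1"] - 1.5 * X["x3"] + rng.normal(size=n)  # only x1 and x3 matter

def fit_bic(cols):
    """Fit OLS on the given predictor columns and return the model's BIC."""
    design = sm.add_constant(X[cols]) if cols else np.ones((n, 1))
    return sm.OLS(y, design).fit().bic

selected, remaining = [], list(X.columns)
current_bic = fit_bic(selected)
while remaining:
    # BIC from adding each remaining candidate to the current model
    scores = {c: fit_bic(selected + [c]) for c in remaining}
    best = min(scores, key=scores.get)
    if scores[best] >= current_bic:  # no candidate lowers BIC: stop
        break
    selected.append(best)
    remaining.remove(best)
    current_bic = scores[best]

print(selected)  # expected to recover ["x1", "x3"]
```

Backward elimination works the same way in reverse: start with all candidates and repeatedly delete whichever variable's removal lowers BIC the most.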
  • What Is a Confidence Interval?

    • Bayesian inference provides further answers in the form of credible intervals.
    • Ostensibly, the Bayesian approach offers intervals that, subject to accepting an interpretation of "probability" as Bayesian probability, can be interpreted as follows: the specific interval calculated from a given dataset has a certain probability of including the true value, conditional on the data and other available information. A small sketch of computing such a credible interval follows below.
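
A small sketch of a credible interval for a proportion, using a Beta posterior under a uniform prior; the counts are hypothetical, and the sketch assumes the scipy library.

```python
from scipy import stats

successes, failures = 47, 53  # hypothetical data
posterior = stats.beta(1 + successes, 1 + failures)  # uniform Beta(1, 1) prior

# Central 95% credible interval: given the data and the prior, the true
# proportion lies inside this interval with 95% posterior probability.
lower, upper = posterior.ppf([0.025, 0.975])
print(f"95% credible interval: ({lower:.3f}, {upper:.3f})")
```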
  • Statistical Power

    • The Statistical Significance Criterion Used in the Test: A significance criterion is a statement of how unlikely a positive result must be, if the null hypothesis of no effect is true, for the null hypothesis to be rejected.
    • One easy way to increase the power of a test is to carry out a less conservative test by using a larger significance criterion, for example 0.10 instead of 0.05.
    • An unstandardized (direct) effect size will rarely be sufficient to determine the power, as it does not contain information about the variability in the measurements.
    • Let's say we use a significance criterion of 0.05; a brief power calculation comparing the 0.05 and 0.10 criteria is sketched below.
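
A brief sketch of how the significance criterion affects power, using the power calculators in statsmodels; the effect size and sample size are hypothetical.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alpha in (0.05, 0.10):
    # Power of a two-sample t-test with a medium standardized effect size
    power = analysis.power(effect_size=0.5, nobs1=50, alpha=alpha)
    print(f"alpha = {alpha:.2f}: power = {power:.3f}")
```

Raising the criterion from 0.05 to 0.10 increases power, at the cost of a higher Type I error rate.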
  • Bayes' Theorem

    • Notice that we are given sufficient information to quickly compute the probability of testing positive if a woman has breast cancer (1.00 − 0.11 = 0.89).
    • Based on the information that the garage is full, there is a 56% probability that a sporting event is being held on campus that evening.
    • Using this information, compute P(no event | the lot is full).
    • This strategy of updating beliefs using Bayes' Theorem is actually the foundation of an entire section of statistics called Bayesian statistics.
    • While Bayesian statistics is very important and useful, we will not have time to cover much more of it in this book; a small numerical illustration of updating with Bayes' Theorem follows below.
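
A small numerical illustration of the update described above. The prior and the two conditional probabilities are hypothetical stand-ins, not the numbers behind the parking example.

```python
def posterior(prior, p_data_given_h, p_data_given_not_h):
    """Bayes' Theorem: P(H | data) from the prior and the two likelihoods."""
    numerator = p_data_given_h * prior
    evidence = numerator + p_data_given_not_h * (1 - prior)
    return numerator / evidence

# Hypothetical values: P(event) = 0.2, P(full | event) = 0.7,
# P(full | no event) = 0.25
p_event_given_full = posterior(0.2, 0.7, 0.25)
print(round(p_event_given_full, 3))      # 0.412
print(round(1 - p_event_given_full, 3))  # P(no event | full) = 0.588
```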
  • Introduction to Multiple Regression

    • In simple linear regression, a criterion variable is predicted from one predictor variable.
    • In multiple regression, the criterion is predicted by two or more variables.
    • In multiple regression, it is often informative to partition the sums of squares explained among the predictor variables.
    • Specifically, the residuals are the differences between the actual scores on the criterion and the predicted scores.
    • It is assumed that the relationship between each predictor variable and the criterion variable is linear; a minimal fitting sketch follows below.
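
A minimal sketch of fitting a multiple regression, computing the residuals, and partitioning the sums of squares; the data are simulated and the coefficients hypothetical, and the sketch assumes the numpy and statsmodels libraries.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.8 * x1 + 0.5 * x2 + rng.normal(scale=0.7, size=n)  # criterion

fit = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()

residuals = y - fit.fittedvalues  # actual minus predicted criterion scores
print(residuals[:3])
print(fit.ess, fit.ssr)   # explained and residual sums of squares
print(fit.ess + fit.ssr)  # the two partition the total sum of squares
```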
  • Defining conditional probability

    • It is useful to think of the condition as information we know to be true, and this information usually can be described as a known outcome or event.
    • Suppose we were provided only the information in Table 2.13, i.e., only probability data.
    • Then if we took a sample of 1000 people, we would anticipate about 47% or 0.47 × 1000 = 470 would meet our information criterion.
    • Similarly, we would expect about 28% or 0.28 × 1000 = 280 to meet both the information criterion and represent our outcome of interest.
    • The complement rule still appears to hold when conditioning on the same information; the ratio computed below makes the arithmetic explicit.
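
The counts above amount to the definition of conditional probability, $P(\text{outcome} \mid \text{condition}) = P(\text{outcome and condition}) / P(\text{condition})$. Using the probabilities quoted above:

```python
p_condition = 0.47  # probability of meeting the information criterion
p_joint = 0.28      # meeting the criterion AND showing the outcome of interest

# Of the ~470 people expected to meet the condition, ~280 also show the
# outcome, so the conditional probability is their ratio.
p_outcome_given_condition = p_joint / p_condition
print(round(p_outcome_given_condition, 3))  # 0.596
```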
  • Glossary

    • In general, the criterion variable is the dependent variable.
    • The degrees of freedom of an estimate is the number of independent pieces of information that go into the estimate.
    • Two variables are said to be independent if the value of one variable provides no information about the value of the other variable.
    • In multiple regression, the criterion is predicted from two or more predictor variables.
    • In the following example, the criterion (Y) is predicted by X1, X2, and X3.
  • Inferential Statistics for b and r

    • The column X has the values of the predictor variable and the column Y has the values of the criterion variable.
    • We now have all the information needed to compute the standard error of b; a standard form of the formula is given below.
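
For reference, the standard error of the slope $b$ in simple regression is commonly written as

$s_b = \dfrac{s_{est}}{\sqrt{\sum (X - \bar{X})^2}}$

where $s_{est}$ is the standard error of the estimate (the standard deviation of the residuals) and the sum in the denominator is taken over the values of the predictor $X$.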
  • Remarks on the Concept of “Probability”

    • This is a natural idea but nonetheless unreasonable if we have further information relevant to whether it will rain tomorrow.
    • Two people might attach different probabilities to the election outcome, yet there would be no criterion for calling one "right" and the other "wrong."
  • Measurement

    • Finally, if a test is being used to select students for college admission or employees for jobs, then the higher the reliability of the test, the stronger its relationship to the criterion will be.
    • Items that are either too easy (so that almost everyone gets them correct) or too difficult (so that almost no one gets them correct) are not good items: they provide very little information, as the note below makes concrete.
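
To make the last point concrete: for an item scored 0/1 with proportion correct $p$, the item's variance is $p(1 - p)$, which is maximized at $p = 0.5$ (variance 0.25) and nearly vanishes for very easy or very hard items; for example, $p = 0.95$ gives variance $0.95 \times 0.05 = 0.0475$, so such an item discriminates very little among test takers.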

Except where noted, content and user contributions on this site are licensed under CC BY-SA 4.0 with attribution required.