successive approximation

(noun)

An increasingly accurate estimate of a response desired by a trainer.

Related Terms

  • paradigm
  • shaping

Examples of successive approximation in the following topics:

  • Shaping

    • Shaping is a method of operant conditioning by which successive approximations of a target behavior are reinforced.
    • Instead of rewarding only the target, or desired, behavior, the process of shaping involves the reinforcement of successive approximations of the target behavior.
    • Then, the trainer rewards a behavior that is one step closer, or one successive approximation nearer, to the target behavior.
    • For example, Skinner would reward the rat for taking a step toward the lever, for standing on its hind legs, and for touching the lever—all of which were successive approximations toward the target behavior of pressing the lever.
    • Continue to reinforce closer and closer approximations of the target behavior.
  • Newton's Method

    • Newton's Method is a method for finding successively better approximations to the roots (or zeroes) of a real-valued function.
    • In numerical analysis, Newton's method (also known as the Newton–Raphson method), named after Isaac Newton and Joseph Raphson, is a method for finding successively better approximations to the roots (or zeroes) of a real-valued function.
    • Provided the function satisfies all the assumptions made in the derivation of the formula, a better approximation $x_1$ is given by $x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$.
    • We see that $x_{n+1}$ is a better approximation than $x_n$ for the root $x$ of the function $f$.
    • Use "Newton's Method" to find successively more accurate estimates for a function's $x$-intercept
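The iteration $x_{n+1} = x_n - f(x_n)/f'(x_n)$ described above can be sketched in a few lines of Python; the helper name, tolerances, and the $\sqrt{2}$ example are illustrative choices, not from the text:

```python
def newtons_method(f, f_prime, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until successive
    approximations agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = x - f(x) / f_prime(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newtons_method(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
```

Each pass through the loop produces the next successive approximation, and the loop stops once two consecutive approximations agree to the requested tolerance.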
  • Normal approximation to the binomial distribution

    • Approximately 20% of the US population smokes cigarettes.
    • Here we consider the binomial model when the probability of a success is p = 0.10.
    • The binomial distribution with probability of success p is nearly normal when the sample size n is sufficiently large that np and n(1−p) are both at least 10.
    • The normal approximation may be used when computing the range of many possible successes.
    • Your answer should be approximately equal to the solution of Example 3.50: 0.0041.
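A minimal sketch of the approximation in this topic, using only the Python standard library (the sample size n = 400 and cutoff of 50 successes are illustrative numbers, not from the text):

```python
import math

def binom_pmf(n, k, p):
    """Exact binomial probability of k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_cdf(x, mu, sigma):
    """Normal cumulative probability via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

n, p = 400, 0.10                       # illustrative: 400 trials, p = 0.10
mu, sigma = n * p, math.sqrt(n * p * (1 - p))   # np = 40, n(1-p) = 360: both >= 10

# Exact binomial probability of 50 or more successes ...
exact = sum(binom_pmf(n, k, p) for k in range(50, n + 1))
# ... versus the normal approximation (49.5 is a continuity correction).
approx = 1 - normal_cdf(49.5, mu, sigma)
```

Because np and n(1−p) are both well above 10 here, the two answers agree closely.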
  • Geometric distribution

    • What if she finds n−1 individuals who administer the worst shock before her first success, i.e. the first success is on the nth person?
    • If the first success is on the nth person, then there are n−1 failures and finally 1 success, which corresponds to the probability $(0.65)^{n-1}(0.35)$.
    • If the probability of a success is high (e.g. 0.8), then we don't usually wait very long for a success: 1/0.8 = 1.25 trials on average.
    • If the probability of a success is low (e.g. 0.1), then we would expect to view many trials before we see a success: 1/0.1 = 10 trials.
    • The geometric distribution is always right skewed and can never be well-approximated by the normal model.
  • Assumption

    • Your data should be a simple random sample that comes from a population that is approximately normally distributed.
    • You use the sample standard deviation to approximate the population standard deviation.
    • (Note that if the sample size is sufficiently large, a t-test will work even if the population is not approximately normally distributed).
    • You must meet the conditions for a binomial distribution: there are a fixed number n of independent trials, each trial results in either success or failure, and each trial has the same probability of success p.
    • Then the sample (estimated) proportion is approximately normally distributed with µ = p and $\sigma=\sqrt{\frac{p \cdot q}{n}}$, where q = 1 − p.
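The mean and standard deviation of the sample proportion given above can be computed directly; this sketch assumes the binomial conditions hold (the p = 0.10, n = 400 values are illustrative, not from the text):

```python
import math

def proportion_sampling_params(p, n):
    """Mean and standard deviation of the sample proportion
    under the normal approximation: mu = p, sigma = sqrt(p*q/n)."""
    q = 1 - p
    return p, math.sqrt(p * q / n)

mu, sigma = proportion_sampling_params(p=0.10, n=400)
```

Note how the standard deviation shrinks as n grows, which is why larger samples give tighter estimates of the population proportion.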
  • Summary of Functions

    • X = the number of independent trials until the first success (count the failures and the first success)
    • The mean µ is typically given. (λ is often used as the mean instead of µ.) When the Poisson is used to approximate the binomial, we use the binomial mean µ = np, where n is the binomial number of trials and p is the probability of a success for each trial.
    • If n is large enough and p is small enough then the Poisson approximates the binomial very well.
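The large-n, small-p claim above is easy to verify numerically; the n = 1000, p = 0.003 values below are illustrative choices, not from the text:

```python
import math

def binom_pmf(k, n, p):
    """Exact binomial probability of k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson probability of k events with mean lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

n, p = 1000, 0.003              # large n, small p
lam = n * p                     # Poisson mean mu = np = 3

# The two pmfs nearly coincide, e.g. at k = 2:
b, po = binom_pmf(2, n, p), poisson_pmf(2, lam)
```

For these parameters the two probabilities differ only in the fourth decimal place, illustrating how well the Poisson approximates the binomial in this regime.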
  • Ecological Succession

    • When disturbances occur, succession allows for communities to become re-established over periods of time.
    • In primary succession, newly-exposed or newly-formed land is colonized by living things.
    • On the Big Island, approximately 32 acres of land are added each year.
    • During primary succession in lava on Maui, Hawaii, succulent plants are the pioneer species.
    • Secondary succession is shown in an oak and hickory forest after a forest fire.
  • Bernoulli distribution

    • Over the years, additional research suggested this number is approximately consistent across communities and time.
    • We label a person a success if she refuses to administer the worst shock.
    • Thus, success or failure is recorded for each person in the study.
    • We may also denote a success by 1 and a failure by 0.
    • Bernoulli random variables are often denoted as 1 for a success and 0 for a failure.
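One payoff of the 0/1 coding described above is that the sample mean of the coded values is exactly the proportion of successes; a minimal sketch (the response data are made up for illustration):

```python
# Encode each person's response: 1 = success (refused), 0 = failure.
responses = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]

# With 0/1 coding, the sample mean equals the proportion of successes.
p_hat = sum(responses) / len(responses)
```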
  • The Binomial Formula

    • Defining a head as a "success," Table 1 shows the probability of 0, 1, and 2 successes for two trials (flips) for an event that has a probability of 0.5 of being a success on each trial.
    • In probability theory and statistics, the binomial distribution is the discrete probability distribution of the number of successes in a sequence of $n$ independent yes/no experiments, each of which yields success with probability $p$.
    • However, for $N$ much larger than $n$, the binomial distribution is a good approximation, and widely used.
    • The formula can be understood as follows: We want $k$ successes ($p^k$) and $n-k$ failures ($(1-p)^{n-k}$); however, the $k$ successes can occur anywhere among the $n$ trials, and there are $C(n, k)$ different ways of distributing $k$ successes in a sequence of $n$ trials.
    • When $n$ is relatively large (say at least 30), the Central Limit Theorem implies that the binomial distribution is well-approximated by the corresponding normal density function with parameters $\mu = np$ and $\sigma = \sqrt{npq}$.
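The formula $C(n,k)\,p^k(1-p)^{n-k}$ explained above reproduces the two-coin-flip probabilities from Table 1; a minimal Python sketch (the function name is illustrative):

```python
import math

def binomial_pmf(k, n, p):
    """C(n, k) ways to place k successes among n trials,
    each arrangement with probability p^k * (1 - p)^(n - k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Two fair-coin flips (p = 0.5): probabilities of 0, 1, and 2 heads.
probs = [binomial_pmf(k, 2, 0.5) for k in range(3)]
```

The three probabilities (0.25, 0.5, 0.25) sum to 1, since 0, 1, or 2 heads exhausts the possible outcomes of two flips.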
  • Sampling Distribution of p

    • The binomial distribution is the distribution of the total number of successes (favoring Candidate A, for example), whereas the distribution of p is the distribution of the mean number of successes.
    • Therefore, the sampling distribution of p and the binomial distribution differ in that p is the mean of the scores (0.70) and the binomial distribution is dealing with the total number of successes (7).
    • The sampling distribution of p is approximately normally distributed if N is fairly large and π is not close to 0 or 1.
    • A rule of thumb is that the approximation is good if both Nπ and N(1 - π) are greater than 10.
    • Note that even though N(1 - π) is only 4, the approximation is quite good.
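The Nπ and N(1 − π) rule of thumb stated above can be written as a small check; the function name and the sample values are illustrative, not from the text:

```python
def normal_approx_ok(N, pi, threshold=10):
    """Rule of thumb: the sampling distribution of p is roughly normal
    when both N*pi and N*(1 - pi) exceed the threshold."""
    return N * pi > threshold and N * (1 - pi) > threshold

ok_large = normal_approx_ok(200, 0.60)   # 120 and 80: condition met
ok_small = normal_approx_ok(10, 0.60)    # 6 and 4: condition not met
```

As the text notes, the approximation can still be serviceable when the condition is narrowly missed, so the rule is a guideline rather than a hard cutoff.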

Except where noted, content and user contributions on this site are licensed under CC BY-SA 4.0 with attribution required.