Central Limit Theorem
See also: distributions, Normal Distribution
Generally speaking, central limit theorems are a set of weak-convergence results in probability theory. Intuitively, they all express the fact that the sum of a large number of independent, identically distributed random variables tends to be distributed according to a particular "attractor distribution". The most important and famous of these results is simply called the Central Limit Theorem; it states that if the (independent) variables have a finite variance, then their suitably standardized sum approaches a normal distribution. Since many real processes yield distributions with finite variance, this explains the ubiquity of the normal distribution.
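In standard notation (the symbols below are not introduced in the original text and are given here only to make the statement precise), the classical Central Limit Theorem for independent, identically distributed variables X_1, ..., X_n with mean mu and finite variance sigma^2 can be written as:

\[
\frac{1}{\sigma\sqrt{n}} \sum_{i=1}^{n} \left( X_i - \mu \right)
\;\xrightarrow{\;d\;}\; N(0,1)
\qquad \text{as } n \to \infty .
\]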
The consequences of the central limit theorem, which is considered to be one of the most important results in statistical theory, can be demonstrated by simulation:
The minimum size of a random sample needed to obtain approximately normally distributed means depends on the distribution function of the population. In general, n has to be larger for highly skewed distribution functions. For n greater than 30, the distribution of the sample means is approximately normal for most population distributions.
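A minimal sketch of such a simulation in Python with NumPy (the exponential population, the sample size of 30, and the number of repetitions are chosen here purely for illustration and are not part of the original text):

import numpy as np

rng = np.random.default_rng(0)

n = 30          # sample size
trials = 10000  # number of repeated samples

# Draw 'trials' samples of size n from a strongly skewed population
# (an exponential distribution with mean 1) and compute each sample mean.
samples = rng.exponential(scale=1.0, size=(trials, n))
means = samples.mean(axis=1)

# Although the population is skewed, the sample means are approximately
# normal, with mean close to 1 and standard deviation close to 1/sqrt(n).
print("mean of sample means:", means.mean())
print("std of sample means :", means.std(), "(expected about", 1 / np.sqrt(n), ")")

Plotting a histogram of the resulting means, and repeating the experiment with smaller n or with more strongly skewed populations, shows how the quality of the normal approximation depends on the sample size and on the shape of the population distribution.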
Hint: A common trick to numerically create an approximately standard normally distributed random variable is to draw 16 numbers from a symmetric uniform distribution and divide their sum by 4; if the uniform distribution has unit variance, the sum has standard deviation sqrt(16) = 4, so the scaled sum has mean 0 and variance 1. This trick is based on the consequences of the central limit theorem.
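A minimal sketch of this trick, assuming a symmetric uniform distribution with unit variance (uniform on [-sqrt(3), sqrt(3)]; this particular choice of interval is an assumption, not stated in the original text):

import numpy as np

rng = np.random.default_rng(0)

def approx_standard_normal(n_values=100000):
    # Draw 16 numbers per output value from a symmetric uniform distribution
    # with unit variance (uniform on [-sqrt(3), sqrt(3)]) and divide each sum
    # by 4 = sqrt(16); by the central limit theorem the result is
    # approximately standard normal.
    u = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(n_values, 16))
    return u.sum(axis=1) / 4.0

z = approx_standard_normal()
print("mean:", z.mean(), "std:", z.std())   # should be close to 0 and 1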
Last Update: 2005-Aug-29