Sunday, January 01, 2006

Generating Gaussian random numbers

When simulating real systems with stochastic properties, or when designing randomized algorithms, we often want the random sequences that drive the process to have a probability distribution other than uniform. The first distribution that comes to mind is the Gaussian distribution (for example, it properly represents the noise that occurs in communication channels, and it is the maximum-entropy distribution for a given variance, which makes it useful in information-theoretic algorithm analysis).

The question is: given a proper generator of uniformly distributed numbers, how does one produce numbers with a Gaussian distribution? A simple and elegant idea is to use the Central Limit Theorem, which states: let Xn (n = 1..N) be independent, identically distributed random variables with finite mean m and variance σ². Then the random variable

(X1 + ... + XN − N·m) / (σ·√N)

converges in distribution to the standard Gaussian distribution N(0,1).
(Simply stated: a suitably normalized sum of independent random variables, uniform ones included, is approximately Gaussian.) This provides us with a simple way to generate random numbers with the normal N(0,1) (m = 0, σ = 1) distribution. We generate 12 uniformly distributed numbers on [0,1) and sum them; each uniform has mean 1/2 and variance 1/12, so the sum has mean 6 and variance 1, and by the theorem it is approximately Gaussian. Finally, we subtract 6 from this sum and get an (approximately) N(0,1)-distributed variable. The picture shows the result of the simulation for 1000 generated random numbers:
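The sum-of-12-uniforms method described above can be sketched in a few lines of Python (the function name is mine, not from the original post):

```python
import random

def gauss_sum12():
    """Approximate N(0,1) sample via the Central Limit Theorem.

    Each uniform on [0, 1) has mean 1/2 and variance 1/12, so the sum
    of 12 of them has mean 6 and variance 1; subtracting 6 centers it.
    """
    return sum(random.random() for _ in range(12)) - 6.0

# 1000 samples, as in the simulation described above
samples = [gauss_sum12() for _ in range(1000)]
```

Plotting a histogram of `samples` should reproduce the bell shape shown in the picture; note the samples are confined to [−6, 6], so the tails are truncated compared to a true Gaussian.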

Of course, generating Gaussian random numbers in this manner is not computationally efficient, because we need to generate 12 uniform numbers for just 1 Gaussian one; more efficient methods have therefore been derived (most notably, the Box-Muller method).
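For comparison, here is a minimal sketch of the basic (trigonometric) Box-Muller transform, which produces two exact N(0,1) samples from only two uniforms:

```python
import math
import random

def box_muller():
    """Return two independent N(0,1) samples from two uniforms.

    u1 is mapped into (0, 1] so that log(u1) stays finite.
    """
    u1 = 1.0 - random.random()
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)
```

Unlike the 12-sum approximation, Box-Muller samples the Gaussian distribution exactly (including the tails), at the cost of a logarithm, a square root, and two trigonometric calls per pair.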

