The uniform distribution is also known as the de Moivre distribution, in honor of the French mathematician Abraham de Moivre (1667-1754), who introduced it to probability theory. The fundamental assumption behind the uniform distribution is that no possible outcome is more or less likely than any other. The continuous uniform distribution applies to continuous random variables, i.e., variables that can take any value within a specified range.
Let us say that a given random variable X is uniformly distributed over the interval from a to b. That is, the smallest value X can assume is a and the largest value it can assume is b. To determine the probability density function (pdf) of such a random variable, we need only remember that the total area under the graph of the pdf must equal 1. Since the pdf is constant throughout the interval on which X can assume values, the area under its graph is that of a rectangle, which can be determined by multiplying its base by its height. But we know the base of the rectangle to be (b - a), the width of the interval over which the random variable is distributed, and its area to be 1. Thus, the height of the rectangle must be 1/(b - a), which is also the probability density function of a uniform random variable over the interval from a to b.
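The rectangle argument above can be sketched in a few lines of Python as a sanity check; the interval [2, 7] is an arbitrary illustrative choice, not anything fixed by the text:

```python
# A minimal sketch of the rectangle argument for the uniform pdf.
# The endpoints a and b are arbitrary example values.
a, b = 2.0, 7.0
height = 1.0 / (b - a)   # constant pdf value everywhere on [a, b]
area = (b - a) * height  # base * height of the rectangle under the pdf
print(area)              # 1.0, as required of any pdf
```

Since the total area is forced to be 1, the height 1/(b - a) is the only constant value the pdf can take.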
What is the mean of a uniformly distributed random variable? It is, conveniently, the midpoint of the interval from a to b, since half of the entire area under the graph of the pdf lies to the right of that midpoint and half lies to the left. So the mean, or mathematical expectation, of a uniformly distributed random variable is E(X) = (a + b)/2.
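The midpoint formula for the mean can be checked by simulation; the sketch below compares (a + b)/2 against the average of many uniform draws, with the interval, seed, and sample size all being arbitrary choices:

```python
import random

# Sketch: Monte Carlo check of the mean of a uniform random variable.
# The interval [2, 7] and the sample size are arbitrary example values.
random.seed(0)
a, b = 2.0, 7.0
n = 100_000
sample_mean = sum(random.uniform(a, b) for _ in range(n)) / n
theoretical_mean = (a + b) / 2
print(theoretical_mean)  # 4.5, the midpoint of [2, 7]
print(sample_mean)       # close to 4.5
```

With 100,000 draws, the sample mean typically lands within a few thousandths of the theoretical midpoint.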
It is also possible to arrive at a convenient formula for the variance of such a uniform variable. Let us consider the following equation used for determining variance:
Var(X) = E(X²) - [E(X)]², where X is our uniformly distributed random variable.
We already know that E(X) = (a + b)/2, so [E(X)]² must equal (a + b)²/4. To find E(X²), we can use the definition of such an expectation as the definite integral of x²·f(x) from a to b, where f(x) is the pdf of our random variable. We already know that f(x) = 1/(b - a); so E(X²) is equal to the integral of x²/(b - a), or x³/(3(b - a)), evaluated from a to b. This becomes (b³ - a³)/(3(b - a)), and since b³ - a³ factors as (b - a)(a² + ab + b²), it simplifies to (a² + ab + b²)/3.
Thus, Var(X) = E(X²) - [E(X)]² = (a² + ab + b²)/3 - (a + b)²/4. Putting both terms over a common denominator of 12 gives [4(a² + ab + b²) - 3(a² + 2ab + b²)]/12 = (a² - 2ab + b²)/12 = (b - a)²/12, which is the variance of any uniformly distributed random variable.
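The two expectations in the variance derivation can be confirmed numerically; the sketch below approximates E(X²) with a midpoint Riemann sum and checks that E(X²) - [E(X)]² matches (b - a)²/12. The interval and step count are arbitrary example values:

```python
# Sketch: numerical check of the uniform variance formula.
# The interval [2, 7] and the number of subintervals are arbitrary.
a, b = 2.0, 7.0
n = 100_000
dx = (b - a) / n
pdf = 1.0 / (b - a)
# Midpoint Riemann sum for E(X^2) = integral of x^2 * f(x) from a to b.
e_x2 = sum(((a + (i + 0.5) * dx) ** 2) * pdf * dx for i in range(n))
mean = (a + b) / 2
var = e_x2 - mean ** 2
print(e_x2)  # close to (a*a + a*b + b*b)/3, i.e. about 22.3333
print(var)   # close to (b - a)**2 / 12, i.e. about 2.0833
```

For a = 2 and b = 7, the formula gives 25/12, or roughly 2.0833, and the numerical estimate agrees to many decimal places.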