# Ideas in Mathematics and Probability: The Uniform Distribution (2007) – Article by G. Stolyarov II

**G. Stolyarov II**

**Note from the Author:**

*This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 4,800 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.*

*~ G. Stolyarov II, July 17, 2014*

The uniform distribution is also known as the de Moivre distribution, in honor of the French mathematician Abraham de Moivre (1667-1754), who introduced it to probability theory. The fundamental assumption behind the uniform distribution is that *none of the possible outcomes is more or less likely than any other.* The uniform distribution applies to continuous random variables, i.e., variables that can assume any values within a specified range.

Let us say that a given random variable X is uniformly distributed over the interval from a to b. That is, the smallest value X can assume is a and the largest value it can assume is b. To determine the probability density function (pdf) of such a random variable, we need only remember that the total area under the graph of the pdf must equal 1. Since the pdf is constant throughout the interval on which X can assume values, the area underneath its graph is that of a rectangle — which can be determined by multiplying its base by its height. But we know the base of the rectangle to be (b-a), the width of the interval over which the random variable is distributed, and its area to be 1. Thus, the height of the rectangle must be **1/(b-a)**, which is also the probability density function of a uniform random variable over the region from a to b.
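This base-times-height reasoning can be checked numerically. The sketch below uses the arbitrary illustrative endpoints a = 2 and b = 7 (not values from the article) and confirms that a constant density of 1/(b-a) encloses a total area of 1:

```python
# Verify that the uniform pdf f(x) = 1/(b - a) has total area 1 over [a, b].
a, b = 2.0, 7.0          # arbitrary example endpoints
height = 1 / (b - a)     # the constant pdf value on [a, b]

# Riemann-sum approximation of the area under the constant pdf
n = 100_000
dx = (b - a) / n
area = sum(height * dx for _ in range(n))

print(height)            # 0.2, i.e. 1/(7 - 2)
print(round(area, 6))    # 1.0
```

Since the pdf is constant, the sum simply reproduces height × (b-a); the Riemann sum is shown only to mirror the integration idea.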

What is the mean of a uniformly distributed random variable? It is, conveniently, the halfway point of the interval from a to b, since half of the entire area under the graph of the pdf will be to the right of such a midway point, and half will be to the left. So the mean or mathematical expectation of a uniformly distributed random variable is **(a+b)/2**.
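A quick Monte Carlo check of the midpoint formula, again with the illustrative endpoints a = 2 and b = 7 (so the predicted mean is 4.5):

```python
import random

# Estimate E(X) for X ~ Uniform(a, b) by simulation;
# the midpoint formula predicts (a + b) / 2.
a, b = 2.0, 7.0
random.seed(0)           # fixed seed so the run is reproducible

n = 200_000
sample_mean = sum(random.uniform(a, b) for _ in range(n)) / n

print((a + b) / 2)       # 4.5, the theoretical mean
print(sample_mean)       # close to 4.5
```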

It is also possible to arrive at a convenient formula for the variance of such a uniform variable. Let us consider the following equation used for determining variance:

Var(X) = E(X^{2}) – [E(X)]^{2} , where X is our uniformly distributed random variable.

We already know that E(X) = (a+b)/2, so [E(X)]^{2} must equal (a+b)^{2}/4. To find E(X^{2}), we can use the definition of such an expectation as the definite integral of x^{2}*f(x) from a to b, where f(x) is the pdf of our random variable. We already know that f(x) = 1/(b-a); so E(X^{2}) is equal to the integral of x^{2}/(b-a), whose antiderivative is x^{3}/3(b-a), evaluated from a to b. This yields (b^{3}-a^{3})/3(b-a), which simplifies to (a^{2}+ab+b^{2})/3.

Thus, Var(X) = E(X^{2}) – [E(X)]^{2} = (a^{2}+ab+b^{2})/3 – (a+b)^{2}/4 = (4a^{2}+4ab+4b^{2})/12 – (3a^{2}+6ab+3b^{2})/12 = (a^{2}-2ab+b^{2})/12 = **(b-a)^{2}/12**, which is the variance for any uniformly distributed random variable.
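The algebra above can be verified numerically. The sketch below (still with the illustrative endpoints a = 2 and b = 7) computes the variance both from the definition E(X²) – [E(X)]² and from the closed-form (b-a)²/12, and confirms they agree:

```python
# Verify Var(X) = (b - a)**2 / 12 against E(X^2) - [E(X)]^2,
# using the exact antiderivatives from the derivation.
a, b = 2.0, 7.0                       # arbitrary example endpoints

e_x = (a + b) / 2                     # mean: (a + b)/2
e_x2 = (b**3 - a**3) / (3 * (b - a))  # E(X^2) = (b^3 - a^3)/(3(b - a))

var_from_def = e_x2 - e_x**2          # definition of variance
var_formula = (b - a) ** 2 / 12       # closed-form result

print(var_from_def)                   # 25/12, about 2.0833
print(var_formula)                    # 25/12, about 2.0833
```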