

## Ideas in Mathematics and Probability: The Uniform Distribution (2007) – Article by G. Stolyarov II

G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 4,800 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.

~ G. Stolyarov II, July 17, 2014

The uniform distribution is also known as the de Moivre distribution, in honor of the French mathematician Abraham de Moivre (1667-1754), who introduced it to probability theory. The fundamental assumption behind the uniform distribution is that none of the possible outcomes is more or less likely than any other. The uniform distribution applies to continuous random variables, i.e., variables that can assume any value within a specified range.

Let us say that a given random variable X is uniformly distributed over the interval from a to b. That is, the smallest value X can assume is a and the largest value it can assume is b. To determine the probability density function (pdf) of such a random variable, we need only remember that the total area under the graph of the pdf must equal 1. Since the pdf is constant throughout the interval on which X can assume values, the area underneath its graph is that of a rectangle — which can be determined by multiplying its base by its height. But we know the base of the rectangle to be (b-a), the width of the interval over which the random variable is distributed, and its area to be 1. Thus, the height of the rectangle must be 1/(b-a), which is also the probability density function of a uniform random variable over the region from a to b.
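To make the rectangle argument concrete, here is a minimal Python sketch (the endpoints a = 2 and b = 10 are arbitrary choices for illustration) that defines the pdf and confirms numerically that the total area beneath it is 1:

```python
# A minimal sketch: the uniform pdf over [a, b] is the constant 1/(b - a),
# and the area under it (base times height) must come out to 1.

a, b = 2.0, 10.0  # illustrative endpoints; any a < b works

def uniform_pdf(x, a, b):
    """Density of a uniform random variable on [a, b]."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

# Approximate the area under the pdf with a midpoint Riemann sum.
n = 100_000
width = (b - a) / n
area = sum(uniform_pdf(a + (i + 0.5) * width, a, b) * width for i in range(n))

print(uniform_pdf(5.0, a, b))  # 0.125, i.e. 1/(10 - 2)
print(round(area, 6))          # 1.0
```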

What is the mean of a uniformly distributed random variable? It is, conveniently, the midpoint of the interval from a to b, since half of the entire area under the graph of the pdf lies to the right of that point, and half lies to the left. So the mean or mathematical expectation of a uniformly distributed random variable is (a+b)/2.
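A quick simulation offers a sanity check on the midpoint formula; the sketch below (with the same arbitrary endpoints as before) averages many uniform draws, and the result should land near (a+b)/2:

```python
import random

# Sanity check of the midpoint formula; the endpoints are arbitrary.
a, b = 2.0, 10.0
samples = [random.uniform(a, b) for _ in range(200_000)]
print(sum(samples) / len(samples))  # close to (a + b) / 2 = 6.0
```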

It is also possible to arrive at a convenient formula for the variance of such a uniform variable. Let us consider the following equation used for determining variance:

Var(X) = E(X²) – [E(X)]², where X is our uniformly distributed random variable.

We already know that E(X) = (a+b)/2, so [E(X)]² must equal (a+b)²/4. To find E(X²), we can use the definition of such an expectation as the definite integral of x²·f(x) evaluated from a to b, where f(x) is the pdf of our random variable. We already know that f(x) = 1/(b-a); so E(X²) is equal to the integral of x²/(b-a), or x³/3(b-a), evaluated from a to b, which becomes (b³-a³)/3(b-a), or (a²+ab+b²)/3.

Thus, Var(X) = E(X²) – [E(X)]² = (a²+ab+b²)/3 – (a+b)²/4 = [4(a²+ab+b²) – 3(a+b)²]/12 = (a²-2ab+b²)/12 = (b-a)²/12, which is the variance of any uniformly distributed random variable.
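Each step of this derivation can also be verified symbolically. The sketch below uses the sympy library (an assumption about the reader's tools, not something the original derivation requires) to reproduce E(X), E(X²), and the variance:

```python
import sympy as sp

a, b, x = sp.symbols('a b x', real=True)

pdf = 1 / (b - a)                                     # uniform density on [a, b]
mean = sp.integrate(x * pdf, (x, a, b))               # E(X)
second_moment = sp.integrate(x**2 * pdf, (x, a, b))   # E(X²)
variance = second_moment - mean**2                    # Var(X) = E(X²) - [E(X)]²

print(sp.simplify(mean))                 # (a + b)/2
print(sp.simplify(second_moment))        # (a**2 + a*b + b**2)/3
print(sp.factor(sp.simplify(variance)))  # (a - b)**2/12, i.e. (b - a)**2/12
```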


## Ideas in Mathematics and Probability: Covariance of Random Variables (2007) – Article by G. Stolyarov II

G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 5,200 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.

~ G. Stolyarov II, July 17, 2014
Analyzing the variances of dependent random variables, and the variances of sums of such variables, is an essential aspect of statistics and actuarial science. The concept of covariance is an indispensable tool for such analysis.

Let us assume that there are two random variables, X and Y. We can call the mathematical expectations of each of these variables E(X) and E(Y) respectively, and their variances Var(X) and Var(Y) respectively. What do we do when we want to find the variance of the sum of the random variables, X+Y? If X and Y are independent variables, this is easy to determine; in that case, simple addition accomplishes the task: Var(X+Y) = Var(X) + Var(Y).
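A brief simulation illustrates the independent case; the normal distributions below are arbitrary choices for the sketch:

```python
import random

# Two independent random variables, simulated by separate draws.
n = 200_000
x = [random.gauss(0, 2) for _ in range(n)]   # Var(X) = 4
y = [random.gauss(0, 3) for _ in range(n)]   # Var(Y) = 9

def var(v):
    m = sum(v) / len(v)
    return sum((t - m) ** 2 for t in v) / len(v)

s = [xi + yi for xi, yi in zip(x, y)]
print(var(x) + var(y))  # about 13
print(var(s))           # also about 13: Var(X+Y) = Var(X) + Var(Y)
```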

But what if X and Y are dependent? Then the variance of the sum most often does not simply equal the sum of the variances. Instead, the idea of covariance must be applied to the analysis. We shall denote the covariance of X and Y as Cov(X, Y).

Two crucial formulas are needed in order to deal effectively with the covariance concept:

Var(X+Y) = Var(X) + Var(Y) + 2Cov(X, Y)

Cov(X, Y) = E(XY) – E(X)E(Y)

We note that these formulas work for both independent and dependent variables. For independent variables, Var(X+Y) = Var(X) + Var(Y), so Cov(X, Y) = 0. Similarly, for independent variables, E(XY) = E(X)E(Y), so Cov(X, Y) = 0.
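Both formulas can be checked numerically on a dependent pair. In the sketch below, the construction of Y from X plus noise is an illustrative assumption; the estimated covariance comes out nonzero, and the two ways of computing Var(X+Y) agree:

```python
import random

n = 200_000
x = [random.gauss(0, 1) for _ in range(n)]
# Make Y dependent on X by construction.
y = [xi + random.gauss(0, 1) for xi in x]

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((t - m) ** 2 for t in v) / len(v)

# Cov(X, Y) = E(XY) - E(X)E(Y)
cov = mean([xi * yi for xi, yi in zip(x, y)]) - mean(x) * mean(y)

lhs = var([xi + yi for xi, yi in zip(x, y)])  # Var(X+Y) computed directly
rhs = var(x) + var(y) + 2 * cov               # via the covariance formula
print(cov)        # about 1, not 0: X and Y are dependent
print(lhs, rhs)   # the two values agree
```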

This leads us to the general insight that the covariance of independent variables is equal to zero. Indeed, this makes conceptual sense as well. The covariance of two variables tells us the extent to which variation in one of the variables accompanies variation in the other. If two variables are independent, what happens to one has no effect on the other, so the variables’ covariance must be zero.

Covariances can be positive or negative, and the sign of the covariance gives useful information about the kind of relationship that exists between the random variables in question. If the covariance is positive, there is a direct relationship between the two random variables: larger values of one tend to accompany larger values of the other. If the covariance is negative, there is an inverse relationship between the two random variables: larger values of one tend to accompany smaller values of the other, and vice versa.
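A short sketch makes the sign interpretation visible; the direct and inverse linear relationships below are illustrative assumptions:

```python
import random

def cov(x, y):
    """Sample covariance of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((u - mx) * (v - my) for u, v in zip(x, y)) / len(x)

n = 100_000
x = [random.gauss(0, 1) for _ in range(n)]
direct = [xi + random.gauss(0, 0.5) for xi in x]    # rises with X
inverse = [-xi + random.gauss(0, 0.5) for xi in x]  # falls as X rises

print(cov(x, direct))   # positive: direct relationship
print(cov(x, inverse))  # negative: inverse relationship
```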

In some problems involving covariance, it is possible to work from even the most basic information to determine the solution. When given random variables X and Y, if one can compute E(X), E(Y), E(X²), E(Y²), and E(XY), one has all the data necessary to solve for Cov(X, Y) and Var(X+Y). From the way each random variable is defined, one can derive the mathematical expectations above and use them to arrive at the covariance and the variance of the sum of the two variables.
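As a worked illustration of this procedure, the sketch below starts from a small, invented joint distribution, computes the five expectations, and arrives at Cov(X, Y) and Var(X+Y):

```python
# Hypothetical joint distribution: P(X=x, Y=y) for a dependent pair.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

E_X  = sum(p * x for (x, y), p in joint.items())      # 0.5
E_Y  = sum(p * y for (x, y), p in joint.items())      # 0.5
E_X2 = sum(p * x**2 for (x, y), p in joint.items())   # 0.5
E_Y2 = sum(p * y**2 for (x, y), p in joint.items())   # 0.5
E_XY = sum(p * x * y for (x, y), p in joint.items())  # 0.4

cov = E_XY - E_X * E_Y            # 0.4 - 0.25 = 0.15
var_x = E_X2 - E_X**2             # 0.25
var_y = E_Y2 - E_Y**2             # 0.25
var_sum = var_x + var_y + 2 * cov # 0.8

print(cov, var_sum)
```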