
## Ideas in Mathematics and Probability: The Uniform Distribution (2007) – Article by G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 4,800 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.

~ G. Stolyarov II, July 17, 2014

The uniform distribution is alternately known as the de Moivre distribution, in honor of the French mathematician Abraham de Moivre (1667-1754) who introduced it to probability theory. The fundamental assumption behind the uniform distribution is that none of the possible outcomes is more or less likely than any other. The uniform distribution applies to continuous random variables, i.e., variables that can assume any values within a specified range.

Let us say that a given random variable X is uniformly distributed over the interval from a to b. That is, the smallest value X can assume is a and the largest value it can assume is b. To determine the probability density function (pdf) of such a random variable, we need only remember that the total area under the graph of the pdf must equal 1. Since the pdf is constant throughout the interval on which X can assume values, the area underneath its graph is that of a rectangle — which can be determined by multiplying its base by its height. But we know the base of the rectangle to be (b-a), the width of the interval over which the random variable is distributed, and its area to be 1. Thus, the height of the rectangle must be 1/(b-a), which is also the probability density function of a uniform random variable over the region from a to b.
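As a quick sanity check, the rectangle argument above can be confirmed in a few lines of Python. This is a sketch, not from the original article; the interval endpoints are hypothetical:

```python
def uniform_pdf(x, a, b):
    """Constant pdf of height 1/(b-a) inside [a, b], zero outside."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

# Hypothetical interval for illustration
a, b = 2.0, 10.0

# Area under the pdf = base * height = (b - a) * 1/(b - a) = 1
area = (b - a) * uniform_pdf(5.0, a, b)
print(area)  # 1.0
```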

What is the mean of a uniformly distributed random variable? It is, conveniently, the halfway point of the interval from a to b, since half of the entire area under the graph of the pdf will be to the right of such a midway point, and half will be to the left. So the mean or mathematical expectation of a uniformly distributed random variable is (a+b)/2, the midpoint of the interval.

It is also possible to arrive at a convenient formula for the variance of such a uniform variable. Let us consider the following equation used for determining variance:

Var(X) = E(X²) – [E(X)]², where X is our uniformly distributed random variable.

We already know that E(X) = (a+b)/2, so [E(X)]² must equal (a+b)²/4. To find E(X²), we can use the definition of such an expectation as the definite integral of x²*f(x) evaluated from a to b, where f(x) is the pdf of our random variable. We already know that f(x) = 1/(b-a); so E(X²) is equal to the integral of x²/(b-a), or x³/[3(b-a)], evaluated from a to b, which becomes (b³-a³)/[3(b-a)], or (a²+ab+b²)/3.

Thus, Var(X) = E(X²) – [E(X)]² = (a²+ab+b²)/3 – (a+b)²/4 = (4a²+4ab+4b² – 3a²–6ab–3b²)/12 = (a²–2ab+b²)/12 = (b-a)²/12, which is the variance for any uniformly distributed random variable.
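To hedge against algebra slips, these formulas can also be checked empirically. The sketch below (interval endpoints and random seed are arbitrary choices) draws a large sample of uniform values and compares the sample mean and variance to (a+b)/2 and (b-a)²/12:

```python
import random

random.seed(0)
a, b = 3.0, 9.0
samples = [random.uniform(a, b) for _ in range(200_000)]

# Sample mean and (population-style) sample variance
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

print(mean, (a + b) / 2)       # both near 6.0
print(var, (b - a) ** 2 / 12)  # both near 3.0
```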

## Ideas in Mathematics and Probability: Covariance of Random Variables (2007) – Article by G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 5,200 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.

~ G. Stolyarov II, July 17, 2014
Analyzing the variances of dependent random variables, and the variances of their sums, is an essential aspect of statistics and actuarial science. The concept of covariance is an indispensable tool for such analysis.

Let us assume that there are two random variables, X and Y. We can call the mathematical expectations of each of these variables E(X) and E(Y) respectively, and their variances Var(X) and Var(Y) respectively. What do we do when we want to find the variance of the sum of the random variables, X+Y? If X and Y are independent variables, this is easy to determine; in that case, simple addition accomplishes the task: Var(X+Y) = Var(X) + Var(Y).

But what if X and Y are dependent? Then the variance of the sum does not, in general, equal the sum of the variances. Instead, the idea of covariance must be applied to the analysis. We shall denote the covariance of X and Y as Cov(X, Y).

Two crucial formulas are needed in order to deal effectively with the covariance concept:

Var(X+Y) = Var(X) + Var(Y) + 2Cov(X, Y)

Cov(X, Y) = E(XY) – E(X)E(Y)

We note that these formulas work for both independent and dependent variables. For independent variables, Var(X+Y) = Var(X) + Var(Y), so Cov(X, Y) = 0. Similarly, for independent variables, E(XY) = E(X)E(Y), so Cov(X, Y) = 0.

This leads us to the general insight that the covariance of independent variables is equal to zero. Indeed, this makes conceptual sense as well. The covariance of two variables is a tool that tells us how much the two variables tend to vary together. If two variables are independent, what happens to one has no effect on the other, so the variables’ covariance must be zero. (Note, however, that the converse does not hold: two dependent variables can also have a covariance of zero.)

Covariances can be positive or negative, and the sign of the covariance can give useful information about the kind of relationship that exists between the random variables in question. If the covariance is positive, then there exists a direct relationship between two random variables; an increase in the values of one tends to also increase the values of the other. If the covariance is negative, then there exists an inverse relationship between two random variables; an increase in the values of one tends to decrease the values of the other, and vice versa.

In some problems involving covariance, it is possible to work from even the most basic information to determine the solution. When given random variables X and Y, if one can compute E(X), E(Y), E(X2), E(Y2), and E(XY), one will have all the data necessary to solve for Cov(X, Y) and Var(X+Y). From the way each random variable is defined, one can derive the mathematical expectations above and use them to arrive at the covariance and the variance of the sums for the two variables.
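As an illustrative sketch of this procedure (the data-generating rule for Y below is a hypothetical choice, made only so that Y depends on X), the following Python code computes Cov(X, Y) = E(XY) – E(X)E(Y) from sample moments and checks the identity Var(X+Y) = Var(X) + Var(Y) + 2Cov(X, Y):

```python
import random

random.seed(1)
xs = [random.uniform(0, 1) for _ in range(100_000)]
ys = [x + 0.5 * random.uniform(0, 1) for x in xs]  # Y depends on X

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((t - m) ** 2 for t in v) / len(v)

# Cov(X, Y) = E(XY) - E(X)E(Y), estimated from the sample
cov = mean([x * y for x, y in zip(xs, ys)]) - mean(xs) * mean(ys)

# Var(X+Y) should equal Var(X) + Var(Y) + 2 Cov(X, Y)
lhs = var([x + y for x, y in zip(xs, ys)])
rhs = var(xs) + var(ys) + 2 * cov
print(lhs, rhs)  # the two sides agree (up to floating-point error)
```

Since Y increases with X here, the computed covariance comes out positive, matching the sign interpretation described above.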

## Concepts in Probability Theory: Mathematical Expectation (2007) – Article by G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 10,000 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.

~ G. Stolyarov II, July 17, 2014

The idea of expectation is crucial to probability theory and its applications. As one who has successfully passed actuarial Exam P on Probability, I would like to educate the general public about this interesting and useful mathematical concept.

The idea of expectation relies on some set of possible outcomes, each of which has a known probability and a known, quantifiable payoff — which can be positive or negative. Let us presume that we are playing a game called X with possible outcomes A, B, and C on a given turn. Each of these outcomes has a known probability P(A), P(B), and P(C) respectively. Each of the outcomes is associated with set payoffs a, b, and c, respectively. How much can one expect to win on an average turn of playing this game?

This is where the concept of expectation comes in. There is a P(A) probability of getting payoff a, a P(B) probability of getting payoff b, and a P(C) probability of getting payoff c. The expectation for a given turn of game X, E(X), is equal to the sum of the products of the probabilities for each given event and the payoffs for that event. So, in this case,

E(X) = a*P(A) + b*P(B) + c*P(C).

Now let us substitute some numbers to see how this concept could be applied. Let us say that event A has a probability of 0.45 of occurring, and if A occurs, you win $50. B has a probability of 0.15 of occurring, and if B occurs, you lose $5. C has a probability of 0.4 of occurring, and if C occurs, you lose $60. Should you play this game? Let us find out.

E(X) = a*P(A) + b*P(B) + c*P(C). Substituting the values given above, we find that E(X) = 50*0.45 + (-5)(0.15) + (-60)(0.40) = -2.25. So, on an average turn of the game, you can expect to lose $2.25.

Note that this corresponds to none of the three possible outcomes A, B, and C. But it does inform you of the kinds of results that you will approach if you play this game for a large number of turns. The Law of Large Numbers implies that the more times you play such a game, the closer your average payoff per turn is likely to come to the expected value E(X). So if you play the game for 5 turns, you can expect to lose 5*2.25 = $11.25, but you will likely experience some deviation from this in the real world. Yet if you play the game for 100 turns, your expected total loss is 100*2.25 = $225, and your average loss per turn will most likely lie much closer to the expected value of $2.25 per turn.
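The Law of Large Numbers can be watched at work in a small simulation. The sketch below (the random seed is an arbitrary choice) plays the game described above repeatedly and prints the average payoff per turn for increasing numbers of turns:

```python
import random

random.seed(2)

def play_one_turn():
    """One turn of the game: A wins $50 (prob 0.45), B loses $5 (prob 0.15),
    C loses $60 (prob 0.40)."""
    r = random.random()
    if r < 0.45:
        return 50.0
    elif r < 0.60:
        return -5.0
    else:
        return -60.0

for n in (5, 100, 100_000):
    avg = sum(play_one_turn() for _ in range(n)) / n
    print(n, avg)  # the average drifts toward E(X) = -2.25 as n grows
```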

In its more general form for some random variable X, the expectation of X, or E(X), can be phrased as the sum of the products of all the possible outcomes x and their probabilities p(x). In mathematical notation, E(X) = Σ x*p(x), where the sum is taken over all values of x. You can apply this formula to any discrete random variable, i.e., a random variable which assumes only a finite or countably infinite set of particular values.
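For instance, here is a minimal sketch of the formula E(X) = Σ x*p(x), using a fair six-sided die as a hypothetical discrete random variable:

```python
# Probability mass function of a fair six-sided die: p(x) = 1/6 for x = 1..6
pmf = {x: 1 / 6 for x in range(1, 7)}

# E(X) = sum of x * p(x) over all values x
expectation = sum(x * p for x, p in pmf.items())
print(expectation)  # 3.5
```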

For a continuous random variable Y, the mathematical expectation is equal to the integral of y*f(y) over the region on which the variable is defined. The function f(y) is called the probability density function of Y; its height over a given domain on a graph indicates the likelihood of the random variable assuming values over that domain.
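This integral can be approximated numerically. The sketch below uses a hypothetical example, the uniform density f(y) = 1/(b-a) on [0, 4], and evaluates the integral of y*f(y) with a midpoint Riemann sum; the result should be the interval's midpoint, 2:

```python
# Hypothetical continuous random variable: uniform on [a, b] = [0, 4]
a, b = 0.0, 4.0
f = lambda y: 1.0 / (b - a)  # constant pdf

# Midpoint Riemann sum approximating the integral of y * f(y) from a to b
n = 10_000
dy = (b - a) / n
expectation = sum(
    (a + (i + 0.5) * dy) * f(a + (i + 0.5) * dy) * dy for i in range(n)
)
print(round(expectation, 6))  # 2.0, the midpoint (a+b)/2
```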