Ideas in Mathematics and Probability: Covariance of Random Variables (2007) – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 5,200 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
~ G. Stolyarov II, July 17, 2014
***
Analyzing the variances of dependent random variables and the variance of their sum is an essential aspect of statistics and actuarial science. The concept of covariance is an indispensable tool for such analysis.
***

Let us assume that there are two random variables, X and Y. We can call the mathematical expectations of each of these variables E(X) and E(Y) respectively, and their variances Var(X) and Var(Y) respectively. What do we do when we want to find the variance of the sum of the random variables, X+Y? If X and Y are independent variables, this is easy to determine; in that case, simple addition accomplishes the task: Var(X+Y) = Var(X) + Var(Y).

But what if X and Y are dependent? Then the variance of the sum most often does not simply equal the sum of the variances. Instead, the idea of covariance must be applied to the analysis. We shall denote the covariance of X and Y as Cov(X, Y).

Two crucial formulas are needed in order to deal effectively with the covariance concept:

Var(X+Y) = Var(X) + Var(Y) + 2Cov(X, Y)

Cov(X, Y) = E(XY) - E(X)E(Y)

We note that these formulas work for both independent and dependent variables. For independent variables, Var(X+Y) = Var(X) + Var(Y), so Cov(X, Y) = 0. Similarly, for independent variables, E(XY) = E(X)E(Y), so Cov(X, Y) = 0.
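As a minimal sketch of how these formulas can be checked numerically (the joint distribution below is an invented example, not one from this article), consider two dependent random variables in Python:

```python
# Toy joint distribution of (X, Y): a dict mapping (x, y) -> P(X=x, Y=y).
# These probabilities are invented for illustration; X and Y are dependent.
joint = {
    (0, 0): 0.3,
    (0, 1): 0.1,
    (1, 0): 0.1,
    (1, 1): 0.5,
}

def expectation(f):
    """E[f(X, Y)] under the joint distribution."""
    return sum(p * f(x, y) for (x, y), p in joint.items())

E_X, E_Y = expectation(lambda x, y: x), expectation(lambda x, y: y)
E_XY = expectation(lambda x, y: x * y)

cov = E_XY - E_X * E_Y                              # Cov(X, Y) = E(XY) - E(X)E(Y)
var_X = expectation(lambda x, y: x**2) - E_X**2     # Var(X) = E(X^2) - E(X)^2
var_Y = expectation(lambda x, y: y**2) - E_Y**2

# Var(X+Y) computed directly from the distribution of the sum...
E_S = expectation(lambda x, y: x + y)
var_S = expectation(lambda x, y: (x + y)**2) - E_S**2

# ...matches Var(X) + Var(Y) + 2Cov(X, Y).
assert abs(var_S - (var_X + var_Y + 2 * cov)) < 1e-12
print(f"Cov(X, Y) = {cov:.2f}, Var(X+Y) = {var_S:.2f}")   # 0.14 and 0.76
```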

These observations lead us to the general insight that the covariance of independent variables is equal to zero. Indeed, this makes conceptual sense as well. The covariance of two variables is a tool that tells us how much the variation in one of the variables is accompanied by variation in the other. If two variables are independent, what happens to one has no effect on the other, so the variables’ covariance must be zero. (The converse does not hold, however: two dependent variables can still have a covariance of zero.)

Covariances can be positive or negative, and the sign of the covariance can give useful information about the kind of relationship that exists between the random variables in question. If the covariance is positive, then there exists a direct relationship between two random variables; an increase in the values of one tends to also increase the values of the other. If the covariance is negative, then there exists an inverse relationship between two random variables; an increase in the values of one tends to decrease the values of the other, and vice versa.

In some problems involving covariance, it is possible to work from even the most basic information to determine the solution. When given random variables X and Y, if one can compute E(X), E(Y), E(X^2), E(Y^2), and E(XY), one will have all the data necessary to solve for Cov(X, Y) and Var(X+Y). From the way each random variable is defined, one can derive the mathematical expectations above and use them to arrive at the covariance and the variance of the sum for the two variables.
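As a sketch of that workflow (the five expectation values below are hypothetical inputs chosen for illustration, and the helper name is mine), the two formulas can be packaged together:

```python
def cov_and_var_sum(E_X, E_Y, E_X2, E_Y2, E_XY):
    """Given the five basic expectations, return Cov(X, Y) and Var(X+Y)."""
    var_X = E_X2 - E_X**2        # Var(X) = E(X^2) - E(X)^2
    var_Y = E_Y2 - E_Y**2
    cov = E_XY - E_X * E_Y       # Cov(X, Y) = E(XY) - E(X)E(Y)
    return cov, var_X + var_Y + 2 * cov

# Hypothetical inputs, implying Var(X) = 1, Var(Y) = 1, Cov(X, Y) = 0.5.
cov, var_sum = cov_and_var_sum(E_X=2.0, E_Y=3.0, E_X2=5.0, E_Y2=10.0, E_XY=6.5)
print(cov, var_sum)   # 0.5 3.0
```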

Concepts in Probability Theory: Mathematical Expectation (2007) – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 10,000 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
~ G. Stolyarov II, July 17, 2014
***

The idea of expectation is crucial to probability theory and its applications. As one who has successfully passed actuarial Exam P on Probability, I would like to educate the general public about this interesting and useful mathematical concept.

The idea of expectation relies on some set of possible outcomes, each of which has a known probability and a known, quantifiable payoff — which can be positive or negative. Let us presume that we are playing a game called X with possible outcomes A, B, and C on a given turn. Each of these outcomes has a known probability P(A), P(B), and P(C) respectively. Each of the outcomes is associated with set payoffs a, b, and c, respectively. How much can one expect to win on an average turn of playing this game?

This is where the concept of expectation comes in. There is a P(A) probability of getting payoff a, a P(B) probability of getting payoff b, and a P(C) probability of getting payoff c. The expectation for a given turn of game X, E(X), is equal to the sum of the products of the probabilities for each given event and the payoffs for that event. So, in this case,

E(X) = a*P(A) + b*P(B) + c*P(C).

Now let us substitute some numbers to see how this concept could be applied. Let us say that event A has a probability of 0.45 of occurring, and if A occurs, you win $50. B has a probability of 0.15 of occurring, and if B occurs, you lose $5. C has a probability of 0.4 of occurring, and if C occurs, you lose $60. Should you play this game? Let us find out.

E(X) = a*P(A) + b*P(B) + c*P(C). Substituting the values given above, we find that E(X) = 50*(0.45) + (-5)*(0.15) + (-60)*(0.40) = -2.25. So, on an average turn of the game, you can expect to lose $2.25.
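In Python, this computation is a one-line weighted sum (the variable names here are mine):

```python
# Expected payoff per turn: E(X) = a*P(A) + b*P(B) + c*P(C)
payoffs = [50, -5, -60]
probabilities = [0.45, 0.15, 0.40]
E_X = sum(pay * p for pay, p in zip(payoffs, probabilities))
print(f"{E_X:.2f}")   # -2.25
```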

Note that this corresponds to none of the three possible outcomes A, B, and C. But it does inform you of the kinds of results that you will approach if you play this game for a large number of turns. The Law of Large Numbers implies that the more times you play such a game, the closer your average payoff per turn is likely to come to the expected value E(X). So if you play the game for 5 turns, you can expect to lose 5*2.25 = $11.25, but you will likely experience some deviation from this in the real world. Yet if you play the game for 100 turns, you can expect to lose 100*2.25 = $225, and your average loss per turn will tend to lie closer to the expected value of $2.25.
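A quick simulation makes this convergence visible (a sketch using Python's standard random module; the seed and the turn counts are arbitrary choices of mine):

```python
import random

# One game turn: +50 with probability 0.45, -5 with 0.15, -60 with 0.40.
def average_payoff(turns, seed=42):
    rng = random.Random(seed)
    draws = rng.choices([50, -5, -60], weights=[0.45, 0.15, 0.40], k=turns)
    return sum(draws) / turns

for n in (5, 100, 100_000):
    print(n, average_payoff(n))
# The per-turn average drifts toward the expected value of -2.25 as n grows.
```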

In its more general form for some random variable X, the expectation of X, or E(X), can be phrased as the sum of the products of all the possible outcomes x and their probabilities p(x). In mathematical notation, E(X) = Σ x*p(x), where the sum is taken over all values of x. You can apply this formula to any discrete random variable, i.e., a random variable that assumes values in a finite or countably infinite set.
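As a small illustration (the fair-die example is mine, not the article's), the discrete formula translates directly into code:

```python
# E(X) = sum of x * p(x) over all values x of a discrete random variable.
def discrete_expectation(pmf):
    """pmf: dict mapping each value x to its probability p(x)."""
    return sum(x * p for x, p in pmf.items())

die = {x: 1/6 for x in range(1, 7)}   # a fair six-sided die
print(discrete_expectation(die))      # 3.5, up to floating-point rounding
```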

For a continuous random variable Y, the mathematical expectation is equal to the integral of y*f(y) over the region on which the variable is defined. The function f(y) is called the probability density function of Y; its height over a given domain on a graph is an indication of the likelihood of the random variable assuming values over that domain.
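As a closing sketch (the exponential density and the midpoint-rule approximation are my illustrative choices, not the article's), the continuous formula can be checked numerically against a known answer:

```python
import math

# E(Y) = integral of y * f(y) over the support of Y, approximated by a
# midpoint Riemann sum. Illustration: an exponential density
# f(y) = lam * exp(-lam * y) on [0, infinity), whose exact mean is 1/lam.
def expectation_continuous(f, lo, hi, steps=200_000):
    dy = (hi - lo) / steps
    return sum((lo + (i + 0.5) * dy) * f(lo + (i + 0.5) * dy) * dy
               for i in range(steps))

lam = 2.0
f = lambda y: lam * math.exp(-lam * y)
# Truncate the infinite upper limit where the density is negligible.
print(expectation_continuous(f, 0.0, 30.0))   # approximately 0.5 = 1/lam
```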