
Ideas in Mathematics and Probability: Conditional Probabilities and Bayes’ Theorem (2007) – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 18, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 2,100 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 18, 2014
***
When analyzing dependent events, the concept of conditional probability becomes a useful tool. The conditional probability of A given B is the probability that event A will occur, given that event B has occurred. In mathematical notation, the probability of A given B is expressed as P(A|B).
***

Bayes’ Theorem enables us to determine the probability that both of two dependent events occur when we know the conditional probability of one of the events given that the other has occurred. It states that the probability of A and B occurring is equal to the product of the probability of B and the conditional probability of A given B, which in turn equals the product of the probability of A and the conditional probability of B given A:
P(A and B) = P(B)*P(A|B) = P(A)*P(B|A).

This theorem works for both independent and dependent events, but for independent events the result is equivalent to what is given by the multiplication rule: P(A and B) = P(B)*P(A). Why is this the case? When two events are independent, the occurrence of one has no effect on whether the other will occur, so the probability of either event should equal the conditional probability of that event given that the other event has taken place. So for independent events A and B: P(A) = P(A|B) and P(B) = P(B|A). To determine whether two events are independent, one can compute their individual probabilities and their conditional probabilities and check whether the former equal the latter.
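This check can be expressed in a few lines of Python; the sketch below is illustrative (the function names are my own), and it simply compares P(A|B) = P(A and B)/P(B) with P(A):

```python
# Sketch: test independence by comparing P(A|B) with P(A).

def conditional_probability(p_a_and_b, p_b):
    """Return P(A|B) = P(A and B) / P(B)."""
    return p_a_and_b / p_b

def are_independent(p_a, p_b, p_a_and_b, tolerance=1e-12):
    """A and B are independent exactly when P(A|B) = P(A),
    i.e. when P(A and B) = P(A)*P(B)."""
    return abs(conditional_probability(p_a_and_b, p_b) - p_a) < tolerance

# Two tosses of a fair coin: A = "first toss is heads", B = "second toss is heads".
print(are_independent(0.5, 0.5, 0.25))                # True: P(A|B) = 0.5 = P(A)

# Two draws from a deck without replacement:
# A = "second card is an ace", B = "first card is an ace".
print(are_independent(4/52, 4/52, (4/52) * (3/51)))   # False: P(A|B) = 3/51, not 4/52
```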

The following sample problem illustrates the kinds of probability questions that Bayes’ Theorem can be used to answer. This particular problem is of my own invention, but the first actuarial exam (Exam P) has been known to include problems of this sort in a virtually identical format.

Problem: A company has four kinds of machines: A, B, C, and D. The probabilities that a machine of a given type will fail on a certain day are: 0.02 for A, 0.03 for B, 0.05 for C, and 0.15 for D. 10% of a company’s machines are of type A, 25% are of type B, 30% are of type C, and 35% are of type D. Given that a machine has failed on a certain day, what is the probability of the machine being of type B?

Solution: First, let us designate the event of a machine’s failure with the letter F. From the given information, P(A) = 0.10, P(B) = 0.25, P(C) = 0.30, and P(D) = 0.35, while P(F|A) = 0.02, P(F|B) = 0.03, P(F|C) = 0.05, and P(F|D) = 0.15. We want to find P(B|F). By Bayes’ Theorem, P(B and F) = P(F)*P(B|F), which we can rearrange as P(B|F) = P(B and F)/P(F).

To solve this, we must determine P(B and F). By another application of Bayes’ Theorem, P(B and F) = P(B)*P(F|B) = 0.25*0.03 = 0.0075. Furthermore,

P(F) = P(A and F) + P(B and F) + P(C and F) + P(D and F)

P(F) = P(A)*P(F|A) + P(B)*P(F|B) + P(C)*P(F|C) + P(D)*P(F|D)

P(F) = 0.10*0.02 + 0.25*0.03 + 0.30*0.05 + 0.35*0.15 = 0.077.

So P(B|F) = P(B and F)/P(F) = 0.0075/0.077 = 15/154, or about 0.0974. Thus, if a machine has failed on a certain day, the probability that it is of type B is 15/154.
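The same calculation can be reproduced in a short Python sketch (the variable names below are illustrative), using exact fractions so that the answer 15/154 appears directly:

```python
# Sketch: Bayes' Theorem applied to the machine-failure problem above.
from fractions import Fraction

# Proportions of each machine type and the failure probability for each type, as given.
p_type = {"A": Fraction(10, 100), "B": Fraction(25, 100),
          "C": Fraction(30, 100), "D": Fraction(35, 100)}
p_fail_given_type = {"A": Fraction(2, 100), "B": Fraction(3, 100),
                     "C": Fraction(5, 100), "D": Fraction(15, 100)}

# P(F): total probability that a machine fails, summed over all types.
p_fail = sum(p_type[t] * p_fail_given_type[t] for t in p_type)

# P(B|F) = P(B and F) / P(F) = P(B)*P(F|B) / P(F).
p_b_given_fail = p_type["B"] * p_fail_given_type["B"] / p_fail

print(p_fail)          # 77/1000, i.e. 0.077
print(p_b_given_fail)  # 15/154, about 0.0974
```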

Ideas in Mathematics and Probability: Independent Events and Dependent Events (2007) – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 18, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 3,300 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 18, 2014
***
This essay discusses independent and dependent events and their role in probability theory and analysis.
***

Let us consider two events, A and B. If the occurrence of event A has no effect on the probability that event B occurs, then A and B are independent events. A classic example of independent events is two tosses of the same fair coin. If a coin lands heads once, this has no influence on whether it will land heads again. The probability of landing heads on any given toss of the fair coin is ½.

It is a common error to presume that once a coin has landed heads a number of times, its probability of landing tails on the next toss increases. Since each toss is an independent event, this cannot be the case. Even if the coin has landed heads 1000 consecutive times, its probability of landing heads on the next toss is still ½.
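A brief simulation sketch in Python (illustrative only; the trial count and variable names are my own) makes the same point: even after conditioning on three consecutive heads, the estimated probability of heads on the next toss stays near ½.

```python
# Sketch: the probability of heads is unaffected by a preceding run of heads.
import random

random.seed(0)
trials = 1_000_000
runs_of_three_heads = 0
heads_on_fourth_toss = 0

for _ in range(trials):
    tosses = [random.random() < 0.5 for _ in range(4)]  # True means heads
    if all(tosses[:3]):              # first three tosses were all heads
        runs_of_three_heads += 1
        if tosses[3]:                # was the fourth toss heads as well?
            heads_on_fourth_toss += 1

# Estimated P(heads on toss 4 | heads on tosses 1-3); it comes out close to 0.5.
print(heads_on_fourth_toss / runs_of_three_heads)
```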

With two dependent events, on the other hand, the outcome of the first event affects the probability of the second. A classic example of such events would be drawing cards from a deck without replacement. A standard 52-card deck contains 4 aces. On the first draw, the probability of choosing an ace is 4/52 or 1/13. However, the probability of choosing an ace on the second draw will depend on whether an ace was selected on the first draw.

If an ace was selected on the first draw, there are 51 cards left to choose from, 3 of which are aces. So the probability of selecting an ace on the second draw is 3/51. But if an ace was not selected on the first draw, there are 4 aces left among 51 cards, so the probability of selecting an ace on the second draw is 4/51. Clearly, then, multiple drawings of cards from a deck without replacement are dependent events.
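A small Python simulation sketch (again illustrative, with hypothetical names) estimates both conditional probabilities by repeatedly drawing two cards without replacement:

```python
# Sketch: conditional probability of an ace on the second draw, without replacement.
import random

random.seed(1)
deck = ["ace"] * 4 + ["other"] * 48
trials = 500_000

second_ace_counts = {"ace": 0, "other": 0}   # aces seen on draw 2, keyed by first draw
first_draw_counts = {"ace": 0, "other": 0}   # how often each first-draw outcome occurred

for _ in range(trials):
    first, second = random.sample(deck, 2)   # two distinct cards, i.e. no replacement
    first_draw_counts[first] += 1
    if second == "ace":
        second_ace_counts[first] += 1

print(second_ace_counts["ace"] / first_draw_counts["ace"])      # near 3/51, about 0.0588
print(second_ace_counts["other"] / first_draw_counts["other"])  # near 4/51, about 0.0784
```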

With any number of independent events, it is possible to use the multiplication rule to find the probability that all of the events in question occur. For example, if A, B, and C are independent events with P(A) = 1/3, P(B) = 3/5, and P(C) = 4/11, then the probability that A and B will both occur is P(A)*P(B) = (1/3)(3/5) = 1/5. The probability that A, B, and C will all occur is P(A)*P(B)*P(C) = (1/3)(3/5)(4/11) = 4/55.
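These products can be checked with exact fractions in a short Python sketch (the variable names are illustrative):

```python
# Sketch: multiplication rule for independent events, with exact fractions.
from fractions import Fraction

p_a, p_b, p_c = Fraction(1, 3), Fraction(3, 5), Fraction(4, 11)

print(p_a * p_b)        # 1/5  -> probability that A and B both occur
print(p_a * p_b * p_c)  # 4/55 -> probability that A, B, and C all occur
```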

It is important to use the multiplication rule only for independent events. With dependent events, the computation of probabilities for multiple events is not so straightforward; it depends on the specific situation and on the dependence relationship among the events. Further explorations of the world of probability theory will acquaint one with methods of analyzing probabilities of multiple dependent events as well.