
Ideas in Mathematics and Probability: Conditional Probabilities and Bayes’ Theorem (2007) – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 18, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 2,100 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 18, 2014
***
When analyzing dependent events, the concept of conditional probability becomes a useful tool. The conditional probability of A given B is the probability that event A will occur, given that event B has occurred. In mathematical notation, the probability of A given B is expressed as P(A|B).
***

Bayes’ Theorem enables us to determine the probability that both of two dependent events occur when we know the conditional probability of one of the events, given that the other has occurred. Bayes’ Theorem states that the probability of A and B occurring equals the product of the probability of B and the conditional probability of A given B, or, equivalently, the product of the probability of A and the conditional probability of B given A:

P(A and B) = P(B)*P(A|B) = P(A)*P(B|A).
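
For readers who wish to check this identity computationally, here is a minimal Python sketch; the two dice events are illustrative choices of my own, not part of the original discussion. It enumerates the 36 equally likely outcomes of two dice and confirms that P(A and B), P(B)*P(A|B), and P(A)*P(B|A) coincide.

```python
from fractions import Fraction

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

A = {o for o in outcomes if o[0] + o[1] == 8}  # event A: the sum is 8
B = {o for o in outcomes if o[0] == 3}         # event B: the first die shows 3

def prob(event):
    return Fraction(len(event), len(outcomes))

# P(A|B) = P(A and B)/P(B), and likewise for P(B|A).
print(prob(A & B))                        # 1/36
print(prob(B) * (prob(A & B) / prob(B)))  # P(B)*P(A|B) = 1/36
print(prob(A) * (prob(A & B) / prob(A)))  # P(A)*P(B|A) = 1/36
```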

This theorem works for both independent and dependent events, but for independent events the result reduces to what the multiplication rule gives: P(A and B) = P(B)*P(A). Why is this the case? When two events are independent, the occurrence of one has no effect on whether the other will occur, so the probability of an event taking place should equal the conditional probability of that event given that the other event has taken place. So for independent events A and B: P(A) = P(A|B) and P(B) = P(B|A). To determine whether two events are independent, one can compute their individual probabilities and their conditional probabilities and check whether the former equal the latter.
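To see this check in action, the sketch below (again with illustrative dice events of my own choosing) compares P(A) with P(A|B) for two events that are intuitively independent.

```python
from fractions import Fraction

outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    return Fraction(len(event), len(outcomes))

def cond(a, b):
    # Conditional probability P(a|b) = P(a and b)/P(b).
    return prob(a & b) / prob(b)

A = {o for o in outcomes if o[0] % 2 == 0}  # first die is even
B = {o for o in outcomes if o[1] > 4}       # second die shows 5 or 6

print(prob(A), cond(A, B))  # both 1/2, so A and B are independent
```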

The following sample problem can illustrate the kinds of probability questions that Bayes’ Theorem can be used to answer. This particular problem is of my own invention, but the first actuarial exam (Exam P) has been known to feature other problems of this sort, virtually identical in format.

Problem: A company has four kinds of machines: A, B, C, and D. The probabilities that a machine of a given type will fail on a certain day are 0.02 for A, 0.03 for B, 0.05 for C, and 0.15 for D. Of the company’s machines, 10% are of type A, 25% are of type B, 30% are of type C, and 35% are of type D. Given that a machine has failed on a certain day, what is the probability that it is of type B?

Solution: First, let us designate the event of a machine’s failure by the letter F. From the given information, P(A) = 0.10, P(B) = 0.25, P(C) = 0.30, and P(D) = 0.35, while P(F|A) = 0.02, P(F|B) = 0.03, P(F|C) = 0.05, and P(F|D) = 0.15. We want to find P(B|F). By Bayes’ Theorem, P(B and F) = P(F)*P(B|F), which we can rearrange as

P(B|F) = P(B and F)/P(F).

To solve this, we must determine P(B and F). By another application of the theorem, P(B and F) = P(B)*P(F|B) = 0.25*0.03 = 0.0075. Furthermore, since every machine is of exactly one of the four types,

P(F) = P(A and F) + P(B and F) + P(C and F) + P(D and F)

P(F) = P(A)*P(F|A) + P(B)*P(F|B) + P(C)*P(F|C) + P(D)*P(F|D)

P(F) = 0.10*0.02 + 0.25*0.03 + 0.30*0.05 + 0.35*0.15 = 0.077.

So P(B|F) = P(B and F)/P(F) = 0.0075/0.077 = 15/154, or about 0.0974. Thus, if a machine has failed on a certain day, the probability that it is of type B is 15/154.
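
The arithmetic above is easy to verify with a short script. The following Python sketch recomputes P(F) by the same total-probability sum and then applies the theorem:

```python
# Shares of each machine type and P(failure | type), from the problem.
priors = {"A": 0.10, "B": 0.25, "C": 0.30, "D": 0.35}
fail = {"A": 0.02, "B": 0.03, "C": 0.05, "D": 0.15}

# P(F) = sum over types of P(type) * P(F | type).
p_f = sum(priors[t] * fail[t] for t in priors)

# P(B|F) = P(B) * P(F|B) / P(F).
p_b_given_f = priors["B"] * fail["B"] / p_f

print(p_f)          # 0.077
print(p_b_given_f)  # about 0.0974 (= 15/154)
```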

Ideas in Mathematics and Probability: Independent Events and Dependent Events (2007) – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 18, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 3,300 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 18, 2014
***
This essay discusses independent and dependent events and their role in probability theory and analyses.
***

Let us consider two events, A and B. If the occurrence of event A has no effect on the probability that event B occurs, then A and B are independent events. A classic example of independent events is two tosses of the same fair coin. If the coin lands heads once, this has no influence on whether it will land heads again. The probability of landing heads on any given toss of the fair coin is ½.

It is a common error to presume that once a coin has landed heads a number of times in a row, its probability of landing tails on the next toss increases. Since each toss is an independent event, this cannot be the case. Even if the coin has landed heads 1000 consecutive times, its probability of landing heads the next time it is tossed is still ½.
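
A simple simulation makes this vivid. The sketch below (a Monte Carlo illustration of my own construction) generates many six-toss sequences, keeps only those that open with five heads, and checks the frequency of heads on the sixth toss:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

streaks = heads_after = 0
for _ in range(200_000):
    flips = [random.random() < 0.5 for _ in range(6)]
    if all(flips[:5]):        # keep sequences opening with 5 heads
        streaks += 1
        heads_after += flips[5]

print(heads_after / streaks)  # close to 0.5, streak notwithstanding
```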

With two dependent events, on the other hand, the outcome of the first event affects the probability of the second. A classic example of such events would be drawing cards from a deck without replacement. A standard 52-card deck contains 4 aces. On the first draw, the probability of choosing an ace is 4/52 or 1/13. However, the probability of choosing an ace on the second draw will depend on whether an ace was selected on the first draw.

If an ace was selected on the first draw, there are 51 cards left to choose from, 3 of which are aces. So the probability of selecting an ace on the second draw is 3/51. But if an ace was not selected on the first draw, there are 4 aces left among 51 cards, so the probability of selecting an ace on the second draw is 4/51. Clearly, then, multiple drawings of cards from a deck without replacement are dependent events.
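
The sketch below works through these card probabilities with exact fractions; as a side check, it also combines the two conditional cases to show that the unconditional probability of an ace on the second draw is still 1/13.

```python
from fractions import Fraction

p_first_ace = Fraction(4, 52)        # 1/13
p_ace_given_ace = Fraction(3, 51)    # an ace was removed on draw one
p_ace_given_other = Fraction(4, 51)  # all four aces remain

# Unconditional second-draw probability, weighing the two cases:
p_second_ace = (p_first_ace * p_ace_given_ace
                + (1 - p_first_ace) * p_ace_given_other)
print(p_ace_given_ace, p_ace_given_other)  # 1/17 and 4/51
print(p_second_ace)                        # 1/13
```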

With any number of independent events, it is possible to use the multiplication rule to find the probability that several of these events all occur. For example, if A, B, and C are independent events, and P(A) — the probability of A — is 1/3, P(B) is 3/5, and P(C) is 4/11, then the probability that both A and B will occur is P(A)*P(B) = (1/3)(3/5) = 1/5. The probability that A, B, and C will all occur is P(A)*P(B)*P(C) = (1/3)(3/5)(4/11) = 4/55.
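Exact arithmetic with Python’s fractions module reproduces these results:

```python
from fractions import Fraction

p_a, p_b, p_c = Fraction(1, 3), Fraction(3, 5), Fraction(4, 11)

print(p_a * p_b)        # 1/5  = P(A and B)
print(p_a * p_b * p_c)  # 4/55 = P(A and B and C)
```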

It is important to use the multiplication rule only for independent events. With dependent events, the computation of probabilities for multiple events is not so straightforward and depends on the specific dependence relationship among them. But further explorations of the world of probability theory will acquaint one with methods of analyzing probabilities of multiple dependent events as well.

Ideas in Mathematics and Probability: The Uniform Distribution (2007) – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 4,800 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 17, 2014
***

The uniform distribution is alternatively known as the de Moivre distribution, in honor of the French mathematician Abraham de Moivre (1667-1754), who introduced it to probability theory. The fundamental assumption behind the uniform distribution is that none of the possible outcomes is more or less likely than any other. The uniform distribution applies to continuous random variables, i.e., variables that can assume any value within a specified range.

Let us say that a given random variable X is uniformly distributed over the interval from a to b. That is, the smallest value X can assume is a and the largest value it can assume is b. To determine the probability density function (pdf) of such a random variable, we need only remember that the total area under the graph of the pdf must equal 1. Since the pdf is constant throughout the interval on which X can assume values, the area underneath its graph is that of a rectangle — which can be determined by multiplying its base by its height. But we know the base of the rectangle to be (b-a), the width of the interval over which the random variable is distributed, and its area to be 1. Thus, the height of the rectangle must be 1/(b-a), which is also the probability density function of a uniform random variable over the region from a to b.
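
A small numerical check (with an interval of my own choosing) confirms both the height of the pdf and that the total area under it is 1:

```python
# Uniform distribution on [a, b]: the pdf is the constant 1/(b - a).
a, b = 2.0, 7.0
height = 1 / (b - a)
print(height)            # 0.2

# Area of the rectangle under the pdf: base * height.
print((b - a) * height)  # 1.0, as the total area under any pdf must be
```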

What is the mean of a uniformly distributed random variable? It is, conveniently, the midpoint of the interval from a to b, since half of the entire area under the graph of the pdf lies to the right of that point, and half lies to the left. So the mean or mathematical expectation of a uniformly distributed random variable is (a+b)/2.

It is also possible to arrive at a convenient formula for the variance of such a uniform variable. Let us consider the following equation used for determining variance:

Var(X) = E(X²) – [E(X)]², where X is our uniformly distributed random variable.

We already know that E(X) = (a+b)/2, so [E(X)]² must equal (a+b)²/4. To find E(X²), we can use the definition of such an expectation as the definite integral of x²*f(x) from a to b, where f(x) is the pdf of our random variable. We already know that f(x) = 1/(b-a); so E(X²) is equal to the integral of x²/(b-a), or x³/[3(b-a)], evaluated from a to b, which becomes (b³-a³)/[3(b-a)], or (a²+ab+b²)/3.
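
For those who prefer to let a computer algebra system carry out the integration, here is a sketch using sympy (assuming it is installed); it reproduces E(X), E(X²), and the variance symbolically:

```python
import sympy as sp

x, a, b = sp.symbols("x a b", positive=True)
pdf = 1 / (b - a)  # uniform pdf on [a, b]

e_x = sp.simplify(sp.integrate(x * pdf, (x, a, b)))      # (a + b)/2
e_x2 = sp.simplify(sp.integrate(x**2 * pdf, (x, a, b)))  # (a**2 + a*b + b**2)/3

print(e_x)
print(e_x2)
print(sp.factor(e_x2 - e_x**2))  # (a - b)**2/12
```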

Thus, Var(X) = E(X²) – [E(X)]² = (a²+ab+b²)/3 – (a+b)²/4 = (b-a)²/12, which is the variance of any uniformly distributed random variable.
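
A quick simulation corroborates both formulas; the endpoints here are arbitrary illustrative choices:

```python
import random

random.seed(1)
a, b = 2.0, 7.0
xs = [random.uniform(a, b) for _ in range(1_000_000)]

mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)

print(mean, (a + b) / 2)       # both near 4.5
print(var, (b - a) ** 2 / 12)  # both near 2.0833
```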

What You Need to Know for Actuarial Exam P (2007) – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 9, 2014
******************************
This essay, originally written and published on Yahoo! Voices in 2007, has helped many actuarial candidates to study for Exam P and has garnered over 15,000 views to date. I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time. While it has been over 7 years since I took and passed Actuarial Exam P, the fundamental advice in this article remains relevant, and I hope that it will assist many actuarial candidates for years to come. 

***
~ G. Stolyarov II, July 9, 2014
***

This is a companion article to “How to Study for Actuarial Exam P Without Paying for Materials”.

If you desire to become an actuary, then passing Exam P on Probability is your opportunity to enter the actuarial science profession and earn a starting salary ranging from about $46,000 to about $67,000 per year. But the colossal number of topics listed on the syllabus may seem intimidating to many. Fortunately, you do not need to know all of them to earn a high grade on the exam. In May 2007, I passed Exam P with the highest possible grade of 10 and can offer some advice on what you need to know in order to do well.

Of course, you need to know the basics of probability theory, including the addition and multiplication rules, mutually independent and dependent events, conditional probabilities, and Bayes’ Theorem. These topics are quite straightforward and do not require knowledge of calculus or any other kind of advanced mathematics; you need to be able to add, multiply, divide, and think logically about the situation presented in the problem — which will often be described in words. Visual aids such as Venn diagrams, contingency tables, and union and intersection notation can be eminently helpful here. Try to master these general probability topics before moving on to the more difficult univariate and multivariate probability distributions.

Next, you will need to know several critically important univariate probability distributions, including some of their special properties. Fortunately, you do not need to know as many as the syllabus suggests.

The Society of Actuaries (SOA) recommends that you learn the “binomial, negative binomial, geometric, hypergeometric, Poisson, uniform, exponential, chi-square, beta, Pareto, lognormal, gamma, Weibull, and normal” distributions, but in fact the ones you will be tested on most extensively are just the binomial, negative binomial, geometric, Poisson, uniform, exponential, and normal. Make sure you know those seven in exhaustive detail, though, because much of the test concerns them. It is a good idea to memorize the formulas for these distributions’ probability density functions, survival functions, means, and variances. Also be able to do computations with the normal distribution using the provided table of areas under the normal curve. Knowledge of calculus, integration, and analysis of discrete finite and infinite sums is necessary to master the univariate probability distributions on Exam P.
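
One convenient way to drill these formulas is to check your memorized means and variances against a library. The sketch below uses scipy.stats (assuming scipy is available); the parameters are arbitrary practice values of my own choosing:

```python
from scipy import stats

dists = {
    "binomial(10, 0.3)":         stats.binom(10, 0.3),
    "geometric(0.2)":            stats.geom(0.2),
    "negative binomial(5, 0.4)": stats.nbinom(5, 0.4),
    "Poisson(3)":                stats.poisson(3),
    "uniform on [2, 7]":         stats.uniform(loc=2, scale=5),
    "exponential(mean 4)":       stats.expon(scale=4),
    "normal(1, 2)":              stats.norm(loc=1, scale=2),
}

for name, d in dists.items():
    # Compare these against the formulas you have memorized.
    print(f"{name}: mean={d.mean():.4f}, variance={d.var():.4f}")
```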

Also pay attention to applications of univariate probability distributions to the insurance sector; know how to solve every kind of problem that involves deductibles and claim limits, because a significant portion of the problems on the test will employ these concepts. Study the SOA’s past exam questions and solutions and read the study note on “Risk and Insurance” to get extensive exposure to these applications of probability theory.
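
As one illustration of this kind of problem (the policy parameters and the exponential loss model below are hypothetical choices of my own, not from any exam), the insurer's expected payment per loss under a deductible d and per-claim limit L can be estimated by simulation and compared against the closed form for an exponential loss with mean θ, namely θ(e^(-d/θ) - e^(-(d+L)/θ)):

```python
import math
import random

random.seed(2)

d, L, theta = 500.0, 5000.0, 2000.0  # hypothetical deductible, limit, mean loss

def payment(loss):
    # The insurer pays the loss above the deductible, capped at the limit.
    return min(max(loss - d, 0.0), L)

losses = (random.expovariate(1 / theta) for _ in range(1_000_000))
simulated = sum(payment(x) for x in losses) / 1_000_000

exact = theta * (math.exp(-d / theta) - math.exp(-(d + L) / theta))
print(simulated, exact)  # both near 1430
```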

The multivariate probability concepts on Exam P are among the most challenging. They require a solid grasp of double integrals and firm knowledge of joint, marginal, and conditional probability distributions – as well as the ability to derive any one of these kinds of distributions from the others. Moreover, many of the problems on the test involve moment-generating functions and their properties – a subject that deserves extensive study and practice in its own right.
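
As a small taste of moment-generating function practice, the sympy sketch below (again assuming sympy is installed) derives the MGF of an exponential random variable with rate lam and recovers the mean from its derivative at t = 0:

```python
import sympy as sp

x = sp.symbols("x", positive=True)
t = sp.symbols("t", negative=True)      # t < 0 guarantees the integral converges
lam = sp.symbols("lam", positive=True)

pdf = lam * sp.exp(-lam * x)            # exponential density with rate lam
mgf = sp.simplify(sp.integrate(sp.exp(t * x) * pdf, (x, 0, sp.oo)))
print(mgf)                              # lam/(lam - t)

# E(X) is the derivative of the MGF evaluated at t = 0.
print(sp.limit(sp.diff(mgf, t), t, 0))  # 1/lam
```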

Furthermore, make sure that you have a solid grasp of the concepts of expectation, variance, standard deviation, covariance, and correlation. Indeed, try to master the problems involving variances and covariances of multiple random variables; these problems become easy once you make a habit of doing them, and solving them quickly and effectively will save a lot of time on the exam and boost your grade. Also make sure that you study the Central Limit Theorem and are able to do problems involving it; this is not a difficult concept once you are conversant with the normal distribution, and mastering Central Limit Theorem problems can go a long way toward enhancing your performance as well.
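
The Central Limit Theorem is also easy to observe numerically. The sketch below averages batches of uniform(0, 1) draws and compares the spread of those averages with the value the theorem predicts:

```python
import random
import statistics

random.seed(3)
n, reps = 50, 20_000

# Each entry is the mean of n independent uniform(0, 1) draws.
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(reps)]

print(statistics.fmean(means))  # near 0.5, the uniform mean
print(statistics.stdev(means))  # near sqrt(1/12)/sqrt(n), about 0.0408
```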

Studying the topics mentioned here can focus your preparation for Exam P and enable you to practice effectively and confidently. Remember, though, that this is still a lot of material. You would be well advised to begin studying for the test at least three months in advance and to study consistently on a daily basis. Practice often with every kind of problem so as to keep your memory and skills fresh. Best wishes on the exam.