

## Ideas in Mathematics and Probability: Conditional Probabilities and Bayes’ Theorem (2007) – Article by G. Stolyarov II
July 18, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 2,100 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 18, 2014
***
When analyzing dependent events, the concept of conditional probability becomes a useful tool. The conditional probability of A given B is the probability that event A will occur, given that event B has occurred. In mathematical notation, the probability of A given B is expressed as P(A|B).
***

Bayes’ Theorem enables us to determine the probability that two dependent events both occur when we know the conditional probability of one of the events, given that the other has occurred. Bayes’ Theorem states that the probability of A and B occurring is equal to the product of the probability of B and the conditional probability of A given B, or the product of the probability of A and the conditional probability of B given A:
P(A and B) = P(B)* P(A|B) = P(A)*P(B|A).

This theorem works for both independent and dependent events, but for independent events, the result is equivalent to what is given by the multiplication rule: P(A and B) = P(B)*P(A). Why is this the case? When two events are independent, the occurrence of one has no effect on whether the other will occur, so the probability of the event taking place should be equal to the conditional probability of that event given that the other event has taken place. So for independent events A and B: P(A) = P(A|B) and P(B) = P(B|A). If one ever wishes to determine whether two events are independent, it is possible to do so by computing their individual probabilities and their conditional probabilities and seeing if the former equal the latter.
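The independence check just described can be sketched in a few lines of Python. The probabilities below are illustrative numbers of my own choosing, not from any problem in this article:

```python
# Illustrative check of independence: compare P(A) with P(A|B).
# These probabilities are made up for demonstration purposes.
p_a = 0.30          # P(A)
p_b = 0.40          # P(B)
p_a_and_b = 0.12    # P(A and B)

# Conditional probability: P(A|B) = P(A and B) / P(B)
p_a_given_b = p_a_and_b / p_b

# A and B are independent when P(A|B) equals P(A)
# (compare within a tolerance to avoid floating-point surprises)
independent = abs(p_a_given_b - p_a) < 1e-9
print(independent)  # True
```

Here 0.12 = 0.30 * 0.40, so the conditional probability matches the unconditional one and the events are independent; changing P(A and B) to any other value would make the check fail.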

The following sample problem can illustrate the kinds of probability questions that Bayes’ Theorem can be used to answer. This particular problem is of my own invention, but the first actuarial exam (Exam P) has been known to have other problems of this sort, which are virtually identical in format.

Problem: A company has four kinds of machines: A, B, C, and D. The probabilities that a machine of a given type will fail on a certain day are: 0.02 for A, 0.03 for B, 0.05 for C, and 0.15 for D. 10% of a company’s machines are of type A, 25% are of type B, 30% are of type C, and 35% are of type D. Given that a machine has failed on a certain day, what is the probability of the machine being of type B?

Solution: First, let us designate the event of a machine’s failure with the letter F. Thus, from the given information in the problem, P(A) = 0.10, P(B) = 0.25, P(C) = 0.3, and P(D) = 0.35. Likewise, P(F|A) = 0.02, P(F|B) = 0.03, P(F|C) = 0.05, and P(F|D) = 0.15. We want to find P(B|F). By Bayes’ Theorem, P(B and F) = P(F)*P(B|F), which we can rearrange as P(B|F) = P(B and F)/P(F). To solve this, we must determine P(B and F). By another application of Bayes’ Theorem, P(B and F) = P(B)*P(F|B) = 0.25*0.03 = 0.0075. Furthermore,

P(F) = P(A and F) + P(B and F) + P(C and F) + P(D and F)

P(F) = P(A)*P(F|A) + P(B)* P(F|B) + P(C)* P(F|C) + P(D)* P(F|D)

P(F) = 0.10*0.02 + 0.25*0.03 + 0.3*0.05 + 0.35*0.15 = 0.077

So P(B|F) = P(B and F)/P(F) = 0.0075/0.077 = 15/154, or about 0.0974. Thus, if a machine has failed on a certain day, the probability that it is of type B is 15/154.
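As a quick numerical check, the machine-failure computation can be carried out in Python. This is a minimal sketch; the dictionary names are my own, not from the article:

```python
# Proportions of each machine type in the company: P(type)
priors = {"A": 0.10, "B": 0.25, "C": 0.30, "D": 0.35}
# Probability that a machine of each type fails on a given day: P(F|type)
fail_given = {"A": 0.02, "B": 0.03, "C": 0.05, "D": 0.15}

# Total probability of a failure: P(F) = sum of P(type) * P(F|type)
p_f = sum(priors[t] * fail_given[t] for t in priors)

# Bayes' Theorem: P(B|F) = P(B and F) / P(F) = P(B) * P(F|B) / P(F)
p_b_given_f = priors["B"] * fail_given["B"] / p_f

print(round(p_f, 3))          # 0.077
print(round(p_b_given_f, 4))  # 0.0974
```

The same loop handles any number of machine types, which is convenient for the four- and five-category problems common on Exam P.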


## Ideas in Mathematics and Probability: The Uniform Distribution (2007) – Article by G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 4,800 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 17, 2014
***

The uniform distribution is alternatively known as the de Moivre distribution, in honor of the French mathematician Abraham de Moivre (1667-1754), who introduced it to probability theory. The fundamental assumption behind the uniform distribution is that none of the possible outcomes is more or less likely than any other. The uniform distribution applies to continuous random variables, i.e., variables that can assume any values within a specified range.

Let us say that a given random variable X is uniformly distributed over the interval from a to b. That is, the smallest value X can assume is a and the largest value it can assume is b. To determine the probability density function (pdf) of such a random variable, we need only remember that the total area under the graph of the pdf must equal 1. Since the pdf is constant throughout the interval on which X can assume values, the area underneath its graph is that of a rectangle — which can be determined by multiplying its base by its height. But we know the base of the rectangle to be (b-a), the width of the interval over which the random variable is distributed, and its area to be 1. Thus, the height of the rectangle must be 1/(b-a), which is also the probability density function of a uniform random variable over the region from a to b.

What is the mean of a uniformly distributed random variable? It is, conveniently, the halfway point of the interval from a to b, since half of the entire area under the graph of the pdf will be to the right of such a midway point, and half will be to the left. So the mean or mathematical expectation of a uniformly distributed random variable is (a+b)/2.

It is also possible to arrive at a convenient formula for the variance of such a uniform variable. Let us consider the following equation used for determining variance:

Var(X) = E(X^2) – [E(X)]^2, where X is our uniformly distributed random variable.

We already know that E(X) = (a+b)/2, so [E(X)]^2 must equal (a+b)^2/4. To find E(X^2), we can use the definition of such an expectation as the definite integral of x^2*f(x) evaluated from a to b, where f(x) is the pdf of our random variable. We already know that f(x) = 1/(b-a); so E(X^2) is equal to the integral of x^2/(b-a), or x^3/[3(b-a)], evaluated from a to b, which becomes (b^3-a^3)/[3(b-a)], or (a^2+ab+b^2)/3.

Thus, Var(X) = E(X^2) – [E(X)]^2 = (a^2+ab+b^2)/3 – (a+b)^2/4 = (b-a)^2/12, which is the variance for any uniformly distributed random variable.
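These formulas are easy to verify by simulation. The following is an illustrative Python sketch; the interval endpoints, sample size, and seed are arbitrary choices of mine:

```python
import random

a, b = 2.0, 10.0   # an arbitrary interval for illustration
n = 200_000        # number of simulated draws
random.seed(0)     # fixed seed for reproducibility
samples = [random.uniform(a, b) for _ in range(n)]

sample_mean = sum(samples) / n
sample_var = sum((x - sample_mean) ** 2 for x in samples) / n

# Theory: mean = (a+b)/2 = 6, variance = (b-a)^2/12 = 64/12, about 5.33
print(round(sample_mean, 2))
print(round(sample_var, 2))
```

With 200,000 draws, the sample mean and variance land within a few hundredths of the theoretical values (a+b)/2 and (b-a)^2/12.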


## Ideas in Mathematics and Probability: Covariance of Random Variables (2007) – Article by G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 5,200 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 17, 2014
***
Analyzing the variances of dependent variables and the sums of those variances is an essential aspect of statistics and actuarial science. The concept of covariance is an indispensable tool for such analysis.
***

Let us assume that there are two random variables, X and Y. We can call the mathematical expectations of each of these variables E(X) and E(Y) respectively, and their variances Var(X) and Var(Y) respectively. What do we do when we want to find the variance of the sum of the random variables, X+Y? If X and Y are independent variables, this is easy to determine; in that case, simple addition accomplishes the task: Var(X+Y) = Var(X) + Var(Y).

But what if X and Y are dependent? Then the variance of the sum most often does not simply equal the sum of the variances. Instead, the idea of covariance must be applied to the analysis. We shall denote the covariance of X and Y as Cov(X, Y).

Two crucial formulas are needed in order to deal effectively with the covariance concept:

Var(X+Y) = Var(X) + Var(Y) + 2Cov(X, Y)

Cov(X, Y) = E(XY) – E(X)E(Y)

We note that these formulas work for both independent and dependent variables. For independent variables, Var(X+Y) = Var(X) + Var(Y), so Cov(X, Y) = 0. Similarly, for independent variables, E(XY) = E(X)E(Y), so Cov(X, Y) = 0.

This leads us to the general insight that the covariance of independent variables is equal to zero. Indeed, this makes conceptual sense as well. The covariance of two variables is a tool that tells us how much of an effect the variation in one of the variables has on the other variable. If two variables are independent, what happens to one has no effect on the other, so the variables’ covariance must be zero.

Covariances can be positive or negative, and the sign of the covariance can give useful information about the kind of relationship that exists between the random variables in question. If the covariance is positive, then there exists a direct relationship between two random variables; an increase in the values of one tends to also increase the values of the other. If the covariance is negative, then there exists an inverse relationship between two random variables; an increase in the values of one tends to decrease the values of the other, and vice versa.

In some problems involving covariance, it is possible to work from even the most basic information to determine the solution. When given random variables X and Y, if one can compute E(X), E(Y), E(X^2), E(Y^2), and E(XY), one will have all the data necessary to solve for Cov(X, Y) and Var(X+Y). From the way each random variable is defined, one can derive the mathematical expectations above and use them to arrive at the covariance and the variance of the sum for the two variables.
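A brief Python sketch can illustrate how these expectations combine. The pair of dependent variables below is my own arbitrary construction, chosen only so that X and Y move together:

```python
import random

random.seed(1)
n = 50_000
# X is uniform on [0, 1]; Y = X + independent noise, so X and Y are dependent
xs = [random.random() for _ in range(n)]
ys = [x + random.random() for x in xs]

def mean(values):
    return sum(values) / len(values)

e_x, e_y = mean(xs), mean(ys)
e_xy = mean([x * y for x, y in zip(xs, ys)])

# Cov(X, Y) = E(XY) - E(X)E(Y)
cov = e_xy - e_x * e_y

var_x = mean([x * x for x in xs]) - e_x ** 2
var_y = mean([y * y for y in ys]) - e_y ** 2
var_sum = mean([(x + y) ** 2 for x, y in zip(xs, ys)]) - (e_x + e_y) ** 2

# Var(X+Y) = Var(X) + Var(Y) + 2Cov(X, Y); Cov > 0 here (direct relationship)
print(cov > 0)
print(abs(var_sum - (var_x + var_y + 2 * cov)) < 1e-6)
```

Because Y rises with X, the computed covariance is positive, and the identity Var(X+Y) = Var(X) + Var(Y) + 2Cov(X, Y) holds to within floating-point error.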


## Concepts in Probability Theory: Mathematical Expectation (2007) – Article by G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 10,000 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 17, 2014
***

The idea of expectation is crucial to probability theory and its applications. As one who has successfully passed actuarial Exam P on Probability, I would like to educate the general public about this interesting and useful mathematical concept.

The idea of expectation relies on some set of possible outcomes, each of which has a known probability and a known, quantifiable payoff — which can be positive or negative. Let us presume that we are playing a game called X with possible outcomes A, B, and C on a given turn. Each of these outcomes has a known probability P(A), P(B), and P(C) respectively. Each of the outcomes is associated with set payoffs a, b, and c, respectively. How much can one expect to win on an average turn of playing this game?

This is where the concept of expectation comes in. There is a P(A) probability of getting payoff a, a P(B) probability of getting payoff b, and a P(C) probability of getting payoff c. The expectation for a given turn of game X, E(X), is equal to the sum of the products of the probabilities for each given event and the payoffs for that event. So, in this case,

E(X) = a*P(A) + b*P(B) + c*P(C).

Now let us substitute some numbers to see how this concept could be applied. Let us say that event A has a probability of 0.45 of occurring, and if A occurs, you win $50. B has a probability of 0.15 of occurring, and if B occurs, you lose $5. C has a probability of 0.4 of occurring, and if C occurs, you lose $60. Should you play this game? Let us find out.

E(X) = a*P(A) + b*P(B) + c*P(C). Substituting the values given above, we find that E(X) = 50*0.45 + (-5)(0.15) + (-60)(0.40) = -2.25. So, on an average turn of the game, you can expect to lose $2.25.
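The arithmetic above takes only a few lines in Python; this is a minimal sketch of the expectation formula applied to the three outcomes:

```python
# (probability, payoff) pairs for outcomes A, B, and C of the game
outcomes = [(0.45, 50), (0.15, -5), (0.40, -60)]

# E(X) = a*P(A) + b*P(B) + c*P(C)
expected_value = sum(p * payoff for p, payoff in outcomes)
print(round(expected_value, 2))  # -2.25
```

Listing the outcomes as pairs makes it easy to extend the same computation to games with any number of outcomes.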

Note that this corresponds to none of the three possible outcomes A, B, and C. But it does inform you of the kinds of results that you will approach if you play this game for a large number of turns. The Law of Large Numbers implies that the more times you play such a game, the more likely your average payoff per turn is to approach the expected value E(X). So if you play the game for 5 turns, you can expect to lose 5*2.25 = $11.25, but you will likely experience some deviation from this in the real world. Yet if you play the game for 100 turns, you can expect to lose 100*2.25 = $225, and your real-world outcome will most likely be quite close to this expected value.
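The Law of Large Numbers claim can be illustrated by simulating many turns of the game. This is an illustrative Python sketch; the seed and number of turns are arbitrary:

```python
import random

random.seed(42)
# Draw 100,000 turns of the game with the stated payoffs and probabilities
payoffs = random.choices([50, -5, -60], weights=[0.45, 0.15, 0.40], k=100_000)

average_payoff = sum(payoffs) / len(payoffs)
# With this many turns, the average lands close to the expectation of -2.25
print(round(average_payoff, 2))
```

Rerunning with k=5 instead of k=100_000 shows far wider swings around -2.25, which is exactly the contrast between the 5-turn and 100-turn cases described above.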

In its more general form for some random variable X, the expectation of X, or E(X), can be phrased as the sum of the products of all the possible outcomes x and their probabilities p(x). In mathematical notation, E(X) = Σ x*p(x) over all values of x. You can apply this formula to any discrete random variable, i.e., a random variable which assumes only a finite or countably infinite set of particular values.

For a continuous random variable Y, the mathematical expectation is equal to the integral of y*f(y) over the region on which the variable is defined. The function f(y) is called the probability density function of Y; its height over a given domain on a graph can be an indication of the likelihood of the random variable assuming values over that domain.
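As a sketch of the continuous case, the integral of y*f(y) for a uniform variable can be approximated numerically with the midpoint rule; the interval here is an arbitrary example of my own:

```python
# Midpoint-rule approximation of E(Y) = integral of y * f(y) over [a, b]
a, b = 0.0, 4.0     # an illustrative interval
n = 10_000          # number of subintervals
dy = (b - a) / n
f = 1 / (b - a)     # uniform pdf: constant height 1/(b-a)

e_y = sum((a + (i + 0.5) * dy) * f * dy for i in range(n))
print(round(e_y, 4))  # 2.0, the midpoint of the interval
```

For a non-uniform variable, one would replace the constant f with the appropriate density evaluated at each midpoint; the rest of the approximation is unchanged.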


## What You Need to Know for Actuarial Exam P (2007) – Article by G. Stolyarov II
July 9, 2014
******************************
This essay, originally written and published on Yahoo! Voices in 2007, has helped many actuarial candidates to study for Exam P and has garnered over 15,000 views to date. I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time. While it has been over 7 years since I took and passed Actuarial Exam P, the fundamental advice in this article remains relevant, and I hope that it will assist many actuarial candidates for years to come.

***
~ G. Stolyarov II, July 9, 2014
***

This is a companion article to “How to Study for Actuarial Exam P Without Paying for Materials”.

If you desire to become an actuary, then passing Exam P on Probability is your opportunity to enter the actuarial science profession and get a starting salary ranging from about $46,000 to about $67,000 per year. But the colossal number of topics listed on the syllabus may seem intimidating to many. Fortunately, you do not need to know all of them to get high grades on the exam. In May 2007, I passed Exam P with the highest possible grade of 10 and can offer some advice on what you need to know in order to do well.

Of course, you need to know the basics of probability theory, including the addition and multiplication rules, mutually independent and dependent events, conditional probabilities, and Bayes’ Theorem. These topics are quite straightforward and do not require knowledge of calculus or any other kind of advanced mathematics; you need to be able to add, multiply, divide, and think logically about the situation presented in the problem — which will often be described in words. Visual aids, such as Venn Diagrams, contingency tables, and the use of union and intersection notation can be eminently helpful here. Try to master these general probability topics before moving on to the more difficult univariate and multivariate probability distributions.

Next, you will need to know several critically important univariate probability distributions, including some of their special properties. Fortunately, you do not need to know as many as the syllabus suggests.

The Society of Actuaries (SOA) recommends that you learn the “binomial, negative binomial, geometric, hypergeometric, Poisson, uniform, exponential, chi-square, beta, Pareto, lognormal, gamma, Weibull, and normal” distributions, but in fact the ones you will be tested on most extensively are just the binomial, negative binomial, geometric, Poisson, uniform, exponential, and normal. Make sure you know those seven in exhaustive detail, though, because much of the test concerns them. It is a good idea to memorize the formulas for these distributions’ probability density functions, survival functions, means, and variances. Also be able to do computations with the normal distribution using the provided table of areas under the normal curve. Knowledge of calculus, integration, and analysis of discrete finite and infinite sums is necessary to master the univariate probability distributions on Exam P.

Also pay attention to applications of univariate probability distributions to the insurance sector; know how to solve every kind of problem which involves deductibles and claim limits, because a significant portion of the problems on the test will employ these concepts. Study the SOA’s past exam questions and solutions and read the study note on “Risk and Insurance” to get extensive exposure to these applications of probability theory.

The multivariate probability concepts on Exam P are among the most challenging. They require a solid grasp of double integrals and firm knowledge of joint, marginal, and conditional probability distributions – as well as the ability to derive any one of these kinds of distributions from the others. Moreover, many of the problems on the test involve moment-generating functions and their properties – a subject that deserves extensive study and practice in its own right.

Furthermore, make sure that you have a solid grasp of the concepts of expectation, variance, standard deviation, covariance, and correlation. Indeed, try to master the problems involving variances and covariances of multiple random variables; these problems become easy once you make a habit of doing them; solving them quickly and effectively will save a lot of time on the exam and boost your grade. Also make sure that you study the Central Limit Theorem and are able to do problems involving it; this is not a difficult concept once you are conversant with the normal distribution, and mastering Central Limit problems can go a long way to enhance your performance as well.

Studying the topics mentioned here can focus your preparation for Exam P and enable you to practice effectively and confidently. Remember, though, that this is still a lot of material. You would be well advised to begin studying for the test at least three months in advance and to study consistently on a daily basis. Practice often with every kind of problem so as to keep your memory and skills fresh. Best wishes on the exam.


## How to Study for Actuarial Exam P Without Paying for Materials (2007) – Article by G. Stolyarov II
July 9, 2014
******************************
This essay, originally written and published on Yahoo! Voices in 2007, is my most-viewed article and second-most-viewed work of all time, at over 81,600 views to date. I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time. While it has been over 7 years since I took and passed Actuarial Exam P, the fundamental advice in this article remains relevant, and I hope that it will assist many actuarial candidates for years to come.

***
~ G. Stolyarov II, July 9, 2014
***
Exam P on Probability, offered by the Society of Actuaries (formerly in conjunction with the Casualty Actuarial Society, which referred to it as Exam 1), is the gateway to the actuarial profession. Those who pass the exam can obtain entry-level jobs as actuaries, with salaries ranging from about $46,000 to about $67,000 per year. After some rigorous studying, I passed this examination in May 2007 with a grade of 10 – the highest possible. Here are some study materials that can help you obtain top marks on Exam P without paying a cent.
***

The breadth of material listed on the syllabus for this test is extensive, and many of the topics are tremendously complex in themselves. Fortunately, not all of the topics listed are actually tested, and the kinds of questions that are asked on the exam are generally more reasonable and straightforward than the ones present in the recommended readings.

As I found out through personal experience, you do not need to spend money at all in purchasing study materials for this exam. Virtually everything you need can already be found online. The most crucial study aid is the list of sample questions from past exams, generously provided by the Society of Actuaries. Along with these questions, you will also find a list of step-by-step solutions which will enable you to check your work. For successful performance on the test, it is essential to be able to successfully solve these problems on your own and to know why you obtained the solutions you did. The problems on the exam are remarkably similar to the ones in the sample questions, so you should do well on your exam if you can solve the problems from prior tests.

In the course of my own studying, I made the mistake of purchasing Michael A. Bean’s Probability: The Science of Uncertainty, a book which does an extremely poor job of explaining the mathematical concepts required for the actuarial exam, because it already presupposes the reader’s expert knowledge of such concepts. Too often, crucial explanations and proofs are omitted from this book, left as “exercises to the reader” – quite a challenge for a reader who simply seeks a basic grasp of the subject!

Furthermore, the exercises in Bean’s book are not conducive to learning the essentials of the probability concepts discussed; these problems are instead so convoluted and laden with unnecessary complications as to baffle even the expert mathematician. Exam P itself is much more reasonable than that; the problems often require some thinking and multiple steps, but you will not be required to pull brilliant, esoteric insights out of thin air, as Bean’s exercises require you to do. To add to the trouble, Bean does not provide answers in the back of the book for most of his problems, thus preventing you from checking your work.