Category: Mathematics

Abstract Orderism Fractal 75 – Art by Gennady Stolyarov II

Abstract Orderism Fractal 75 – by Gennady Stolyarov II

Note: Left-click on this image to get a full view of this digital work of fractal art.

In this fractal, translucent neon filaments coalesce into macro-spirals.

This digital artwork was created by Mr. Stolyarov in Apophysis, a free program that facilitates deliberate manipulation of randomly generated fractals into intelligible shapes.

This fractal is an extension of Mr. Stolyarov’s artistic style of Abstract Orderism, whose goal is the creation of abstract objects that are appealing by virtue of their geometric intricacy — a demonstration of the order that man can both discover in the universe and bring into existence through his own actions and applications of the laws of nature.

Fractal art is based on the idea of the spontaneous order – which is pivotal in economics, culture, and human civilization itself. Now, using computer technology, spontaneous orders can be harnessed in individual art works as well.

See the index of Mr. Stolyarov’s art works.

Abstract Orderism Fractal 74 – Art by Gennady Stolyarov II

Abstract Orderism Fractal 74 – by Gennady Stolyarov II

Note: Left-click on this image to get a full view of this digital work of fractal art.

This fractal is an assembly of translucent strands and layers, with a bit of experimentation with colorful gradients.

This digital artwork was created by Mr. Stolyarov in Apophysis, a free program that facilitates deliberate manipulation of randomly generated fractals into intelligible shapes.

This fractal is an extension of Mr. Stolyarov’s artistic style of Abstract Orderism, whose goal is the creation of abstract objects that are appealing by virtue of their geometric intricacy — a demonstration of the order that man can both discover in the universe and bring into existence through his own actions and applications of the laws of nature.

Fractal art is based on the idea of the spontaneous order – which is pivotal in economics, culture, and human civilization itself. Now, using computer technology, spontaneous orders can be harnessed in individual art works as well.

See the index of Mr. Stolyarov’s art works.

The Overuse of Mathematics in Economics – Article by Luka Nikolic

Luka Nikolic
September 2, 2019

***************************

If you enrolled at university today, you would find economics modules filled with mathematics and statistics to explain economic phenomena. There would also be next to no philosophy, law, or history, all of which are much more important to understanding the way our world works and how it impacts the economy.

The reason is that since the end of the 19th century, there has been a push toward turning economics into a science—like physics or chemistry. Much of this has been done by quantifying phenomena and explaining them through graphs. It has been precisely since this shift that public policy, from fiscal to monetary, has had such a poor track record.

What many contemporary economists fail to realize is that economics is as much of a philosophical pursuit as a mathematical one, if not more so.

Modern economics was first introduced as a formal subject called “history and political economy” in 1805. Economics was a three-decade-old discipline then, as Adam Smith had published his Wealth of Nations in 1776. The earliest economists were philosophers who used deduction and logic to explain the market. Smith deployed numerical analysis only as a means of qualitatively assessing government policies such as legislated grain prices and their impact. No graphs or equations were used.

Even earlier, 17th-century philosopher John Locke contributed more to economic liberty than any mathematician has since. Likewise, philosopher David Hume successfully explained the impact of free trade with his price-specie flow mechanism theory, which employs pure logic. John Stuart Mill’s book On Liberty likewise furthered the cause for free markets without using math.

In 1798, Malthus mathematically predicted mass starvation due to population growth, but he could not quantify the rule of law and free markets.

The first substantial misuse of mathematics was by Thomas Malthus in 1798. He predicted mass starvation due to population growth, which was exponential and outpacing agricultural production, which was arithmetic. Malthus was evidently wrong, as contemporary free-market Japan’s population density towers over that of collectivist sub-Saharan Africa. Malthus could not quantify the rule of law and free markets.

Alfred Marshall’s Principles of Economics (1890) was the first groundbreaking textbook to use equations and graphs. One of Marshall’s students, John Maynard Keynes, would further the cause of quantifying economics by mathematically linking income and expenditure and how government policy could impact this. Keynes’ General Theory (1936) would serve as a blueprint for 20th-century economic policy as more scientific methods of economics gained favor in the coming decades. Friedrich Hayek summarized this shift in his Nobel Prize acceptance speech.

It seems to me that this failure of the economists to guide policy more successfully is closely connected with their propensity to imitate as closely as possible the procedures of the physical sciences—an attempt which in our field may lead to outright error. It is an approach which has come to be described as the “scientistic” attitude—an attitude which is decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed.

It is impossible to quantify human action. Although equations, such as utility measures, do exist to quantify human behavior, they are faulty when examined. How can an equation tell me when I am no longer satisfied with a certain good? Mathematically speaking, it is when marginal utility becomes negative. This may be true. However, the problem is how to determine how much chocolate will give me a stomach ache—mathematically speaking, what amount will produce negative marginal utility. A doctor could not figure this out, let alone an economist.

There cannot be “catch-all” formulas due to the complexity of economic phenomena. Measuring the elasticity of demand for a certain good is at best a contribution to economic history. Elasticity will hardly be constant in the same country throughout time, let alone in other countries. However, the economists pursuing this analysis do not do it to update economic history—it is done for the purpose of having government micromanage demand for these goods. In reality, government should allow the free market to produce a certain good. The market will determine the demand/supply.

Economics is more related to jurisprudence than math.

Economics, among other things, is the study of the allocation of scarce resources. If there is a limit of a certain good, it’s not the government’s job to utilize an equation to distribute it. Rather, governments must ensure that the property rights of that good are clearly defined. It is then up to the person who owns the good to allocate it. As such, economics is more related to jurisprudence than math.

The Solow-Swan growth model is a perfect example of quantifying economics. It claims to explain long-run economic growth based on productivity, capital accumulation, and other variables. It is unquestionable that these factors impact growth; however, the model oversimplifies the complex interactions between various qualitative factors.

For example, English Common Law has allowed countries such as the US or Hong Kong to prosper more than African nations with no basis for the rule of law and where corruption is still widespread. Protestant nations were historically more favorable toward capitalism than nations of other religions. Both of these factors undoubtedly affected the variables in the Solow-Swan model—the problem is quantifying them. Productivity and capital accumulation do not “just happen.”

Monetary policy has suffered the worst. Today, central banks manipulate interest rates to stimulate the economy due to a false belief in purely theoretical mathematical models. Such sophisticated analysis would be welcome if it offered a better track record. By artificially lowering interest rates, central banks create malinvestment in the economy, creating a bubble.

Once the economy is deemed to be “overheating,” the rates are raised, causing the bubble to burst. This is precisely what has happened since the introduction of discretionary monetary policy in many instances. The 2008 crisis is the most recent example.

However, such policy was not possible under the gold standard, because there was no need for a central bank, or for monetary policy as a tool, to even exist. Likewise, the economy was much more stable. Why did gold work? It could not be manipulated easily by the government, and furthermore, it was spontaneously chosen by people because it fulfilled the necessary criteria. Mathematical formulas cannot replicate this. One economist jokingly described it:

Instead of trading away your valuable pigs for horses, why not accept some smooth stones? Don’t worry that you don’t want them, someone else will give you horses in exchange for them! If we could just all agree on which smooth stones are valuable, we’d all be so much better off!

While serving as Hong Kong’s financial secretary from 1961 to 1971, John Cowperthwaite was skeptical about government collecting statistics outside what was necessary, claiming, “If I let them compute those statistics, they’ll want to use them for planning!” Hong Kong remains one of the richest and freest economies.

It should be recognized that mathematically-driven economics is a divergence from the foundation of traditional economics.

Sadly, Cowperthwaite’s skepticism of central planning based on models is rarely heeded today, as evidenced by the Keynesianism that has reemerged in the intellectual sphere. Furthermore, considering that publishing in mathematically driven economics journals is needed to secure tenure, it is questionable whether mainstream economics will change under such incentives.

Mathematics has a place at best for budgets and debt servicing—but it should be recognized that mathematically-driven economics is a divergence from the foundation of traditional economics.

Abstract Orderism Fractal 73 – Art by Gennady Stolyarov II

Abstract Orderism Fractal 73 – by Gennady Stolyarov II

Note: Left-click on this image to get a full view of this digital work of fractal art.

This fractal is a set of nested, ornamented domes complemented by swirl patterns on both macro and micro scales.

This digital artwork was created by Mr. Stolyarov in Apophysis, a free program that facilitates deliberate manipulation of randomly generated fractals into intelligible shapes.

This fractal is an extension of Mr. Stolyarov’s artistic style of Abstract Orderism, whose goal is the creation of abstract objects that are appealing by virtue of their geometric intricacy — a demonstration of the order that man can both discover in the universe and bring into existence through his own actions and applications of the laws of nature.

Fractal art is based on the idea of the spontaneous order – which is pivotal in economics, culture, and human civilization itself. Now, using computer technology, spontaneous orders can be harnessed in individual art works as well.

See the index of Mr. Stolyarov’s art works.

Elevated Fractal City III – Art by Gennady Stolyarov II

Elevated Fractal City III – by Gennady Stolyarov II

Note: Left-click on this image to get a full view of this digital work of fractal art.

“Elevated Fractal City III” depicts an angular, luminous outpost in the night on a befogged world. Even such less hospitable alien worlds will one day be colonized by our civilization, and the colonists will build their own amenities.

This digital artwork was created by Mr. Stolyarov in Apophysis, a free program that facilitates deliberate manipulation of randomly generated fractals into intelligible shapes.

This fractal is an extension of Mr. Stolyarov’s artistic style of Abstract Orderism, whose goal is the creation of abstract objects that are appealing by virtue of their geometric intricacy — a demonstration of the order that man can both discover in the universe and bring into existence through his own actions and applications of the laws of nature.

Fractal art is based on the idea of the spontaneous order – which is pivotal in economics, culture, and human civilization itself. Now, using computer technology, spontaneous orders can be harnessed in individual art works as well.

See the index of Mr. Stolyarov’s art works.

Stellar Infrastructure – Fractal Art by Gennady Stolyarov II

Stellar Infrastructure – by Gennady Stolyarov II

Note: Left-click on this image to get a full view of this digital work of fractal art.

“Stellar Infrastructure” anticipates an era when civilization will extend to multiple star systems, which will be regularly traversed and connected by means of technological orders far exceeding humankind’s current abilities.

This digital artwork was created by Mr. Stolyarov in Apophysis, a free program that facilitates deliberate manipulation of randomly generated fractals into intelligible shapes.

This fractal is an extension of Mr. Stolyarov’s artistic style of Abstract Orderism, whose goal is the creation of abstract objects that are appealing by virtue of their geometric intricacy — a demonstration of the order that man can both discover in the universe and bring into existence through his own actions and applications of the laws of nature.

Fractal art is based on the idea of the spontaneous order – which is pivotal in economics, culture, and human civilization itself. Now, using computer technology, spontaneous orders can be harnessed in individual art works as well.

See the index of Mr. Stolyarov’s art works.

Fractal of 85 – Art by Gennady Stolyarov II

Fractal of 85 – by G. Stolyarov II

Note: Left-click on this image to get a full view of this digital work of fractal art.

This fractal was created by Gennady Stolyarov II as a present from one mathematician to another, based on 85-fold rotational symmetry for the 85th Birthday of his grandfather and namesake, Gennady Stolyarov I, on October 24, 2018. Notice how the rings can continue to stack along their orbit.

Gennady Stolyarov I, upon seeing the fractal, remarked that it illustrated to him that an entire lifetime is indeed long. However, in the view of Gennady Stolyarov II, it should be made even longer!

This digital artwork was created by Mr. Stolyarov in Apophysis, a free program that facilitates deliberate manipulation of randomly generated fractals into intelligible shapes.

This fractal is an extension of Mr. Stolyarov’s artistic style of Abstract Orderism, whose goal is the creation of abstract objects that are appealing by virtue of their geometric intricacy — a demonstration of the order that man can both discover in the universe and bring into existence through his own actions and applications of the laws of nature.

Fractal art is based on the idea of the spontaneous order – which is pivotal in economics, culture, and human civilization itself. Now, using computer technology, spontaneous orders can be harnessed in individual art works as well.

See the index of Mr. Stolyarov’s art works.

What Are the Chances That a Muslim Is a Terrorist? – Article by Sanford Ikeda

Sanford Ikeda
******************************
It’s flu season, and for the past two days you’ve had a headache and sore throat. You learn that 90% of people who actually have the flu also have those symptoms, which makes you worry. Does that mean the chances of your having the flu are 90%? In other words, if there’s a 90% chance of having a headache and sore throat given that you have the flu, does that mean there’s a 90% chance of having the flu given that you have a headache and sore throat?

We can use symbols to express this question as follows: Pr(Flu | Symptoms) = Pr(Symptoms | Flu) = 90%?

The answer is no. Why?

If you think about it, you’ll realize that there are other things besides the flu that can give you a combination of a headache and sore throat, such as a cold or an allergy, so having those symptoms is certainly not the same thing as having the flu. Similarly, while fire produces smoke, the old saying that “where there’s smoke there’s fire” is wrong because it’s quite possible to produce smoke without fire.

Fortunately, there’s a nice way to account for this.

How Bayes’ Theorem Works

Suppose you learn that, in addition to Pr(Symptoms | Flu) = 90%, the probability of a randomly chosen person having a headache and sore throat this season, regardless of the cause, is 10% – i.e., Pr(Symptoms) = 10% – and that only one person in 100 will get the flu this season – i.e., Pr(Flu) = 1%. How does this information help?

Again, what we want to know are the chances of having the flu, given these symptoms: Pr(Flu | Symptoms). To find that, we first need to multiply the probability of having those symptoms if we have the flu (90%) by the probability of having the flu (1%). In other words, there’s a 90% chance of having those symptoms if in fact we do have the flu, and the chances of having the flu are only 1%. That means Pr(Symptoms | Flu) x Pr(Flu) = 0.90 x 0.01 = 0.009, or 0.9% – a bit less than one chance in 100.

Finally, we need to divide that result by the probability of having a headache and sore throat regardless of the cause, Pr(Symptoms), which is 10% or 0.10, because we need to know what share of all the headache-and-sore-throat symptoms that have occurred are actually flu symptoms.

So, putting it all together, the answer to the question, “What is the probability that your Symptoms are caused by the Flu?” is as follows:

Pr(Flu | Symptoms) = [Pr(Symptoms | Flu) x Pr(Flu)] ÷ Pr(Symptoms) = 0.90 x 0.01 ÷ 0.10 = 0.09 or 9%.
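This calculation is easy to check in a few lines of Python (a minimal sketch; the function name and structure are mine, not part of the original essay):

```python
def bayes(pr_b_given_a, pr_a, pr_b):
    """Bayes' theorem: Pr(A | B) = Pr(B | A) * Pr(A) / Pr(B)."""
    return pr_b_given_a * pr_a / pr_b

# Pr(Flu | Symptoms) from Pr(Symptoms | Flu) = 90%,
# Pr(Flu) = 1%, and Pr(Symptoms) = 10%.
pr_flu = bayes(pr_b_given_a=0.90, pr_a=0.01, pr_b=0.10)
print(round(pr_flu, 4))  # 0.09, i.e. 9%
```

The same three-argument function works for any of the conditional-probability questions in this essay; only the inputs change.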

So if you have a headache and sore throat there’s only a 9% chance, not 90%, that you have the flu, which I’m sure will come as a relief!

This particular approach to calculating “conditional probabilities” is called Bayes’ Theorem, after Thomas Bayes, the 18th-century Presbyterian minister who came up with it. The example above is one that I got out of this wonderful little book.

Muslims and Terrorism

Now, according to some sources (here and here), 10% of Terrorists are Muslim. Does this mean that there’s a 10% chance that a Muslim person you meet at random is a terrorist?  Again, the answer is emphatically no.

To see why, let’s apply Bayes’ theorem to the question, “What is the probability that a Muslim person is a Terrorist?” Or, stated more formally, “What is the probability that a person is a Terrorist, given that she is a Muslim?” or Pr(Terrorist | Muslim)?

Let’s calculate this the same way we did for the flu using some sources that I Googled and that appeared to be reliable.  I haven’t done a thorough search, however, so I won’t claim my result here to be anything but a ballpark figure.

So I want to find Pr(Terrorist | Muslim), which according to Bayes’ Theorem is equal to…

1) Pr(Muslim | Terrorist):  The probability that a person is a Muslim given that she’s a Terrorist is about 10% according to the sources I cited above, which report that around 90% of Terrorists are Non-Muslims.

Multiplied by…

2) Pr(Terrorist): The probability that someone in the United States is a Terrorist of any kind. I calculated this first by taking the total number of known terrorist incidents in the U.S. back through 2000, which I tallied as 121 from this source and as 49 from this source. At the risk of over-stating the incidence of terrorism, I took the higher figure and rounded it to 120. Next, I multiplied this by 10, under the assumption that on average 10 persons lent material support for each terrorist act (which may be high), and then multiplied that result by 5, under the assumption that only one-in-five planned attacks are actually carried out (which may be low). (I just made up these multipliers because the data are hard to find; these numbers seem to be at the higher and lower ends of what is likely the case, and I’m trying to make the connection as strong as I can, but I’m certainly willing to entertain evidence showing different numbers.) This equals 6,000 Terrorists in America between 2000 and 2016, which assumes that no person participated in more than one terrorist attempt (not likely) and that all these persons were active terrorists in the U.S. during those 17 years (not likely), all of which means 6,000 is probably an over-estimate of the number of Terrorists.

If we then divide 6,000 by 300 million people in the U.S. during this period (again, I’ll over-state the probability by not counting tourists and visitors) that gives us a Pr(Terrorist) = 0.00002 or 0.002% or 2 chances out of a hundred-thousand.

Now, divide this by…

3) The probability that someone in the U.S. is a Muslim, which is about 1%.

Putting it all together gives the following:

Pr(Terrorist | Muslim) = [Pr(Muslim | Terrorist) x Pr(Terrorist)] ÷ Pr(Muslim) = 10% x 0.002% ÷ 1% = 0.0002 or 0.02%.
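The whole chain of estimates can be reproduced in a short Python sketch (the variable names are mine; the 120 × 10 × 5 construction and the other inputs follow the assumptions stated above):

```python
# Estimate Pr(Terrorist) from the deliberately generous assumptions above.
incidents = 120            # higher tally of U.S. incidents since 2000, rounded
supporters_per_act = 10    # assumed material supporters per act (may be high)
planned_per_executed = 5   # assumed planned attacks per executed one (may be low)
terrorists = incidents * supporters_per_act * planned_per_executed  # 6,000
pr_terrorist = terrorists / 300_000_000                             # 0.00002

# Bayes' theorem: Pr(Terrorist | Muslim)
pr_muslim_given_terrorist = 0.10   # ~10% of Terrorists are Muslim
pr_muslim = 0.01                   # ~1% of the U.S. is Muslim
pr_terrorist_given_muslim = pr_muslim_given_terrorist * pr_terrorist / pr_muslim
print(f"{pr_terrorist_given_muslim:.4%}")  # 0.0200%
```

Changing any of the made-up multipliers simply scales the result, which is why the conclusion survives even large errors in these inputs.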

One interpretation of this result is that the probability that a Muslim person, whom you encounter at random in the U.S., is a terrorist is about 1/50th of one-percent. In other words, around one in 5,000 Muslim persons you meet at random is a terrorist.  And keep in mind that the values I chose to make this calculation deliberately over-state, probably by a lot, that probability, so that the probability that a Muslim person is a Terrorist is likely much lower than 0.02%.

Moreover, the probability that a Muslim person is a Terrorist (0.02%) is 500 times lower than the probability that a Terrorist is a Muslim (10%).

(William Easterly of New York University applies Bayes’ theorem to the same question, using estimates that don’t over-state as much as mine do, and calculates the difference not at 500 times but 13,000 times lower!)

Other Considerations

As low as the probability of a Muslim person being a Terrorist is, the same data do indicate that a Non-Muslim person is much less likely to be a Terrorist.  By substituting values where appropriate – Pr(Non-Muslim | Terrorist) = 90% and Pr(Non-Muslim) = 99% – Bayes’ theorem gives us the following:

Pr(Terrorist | Non-Muslim) = [Pr(Non-Muslim | Terrorist) x Pr(Terrorist)] ÷ Pr(Non-Muslim) = 90% x 0.002% ÷ 99% = 0.00002 or 0.002%.

So one interpretation of this is that a randomly chosen Non-Muslim person is around one-tenth as likely to be a Terrorist as a Muslim person (i.e., 0.002%/0.02%). Naturally, the probabilities will be higher or lower if you’re at a terrorist convention or at an anti-terrorist peace rally; and if you have additional data that further differentiates among various groups – such as Wahhabi Sunni Muslims versus Salafist Muslims, or Tamil Buddhists versus Tibetan Buddhists – the results again will be more accurate.

But whether you’re trying to educate yourself about the flu or terrorism, common sense suggests using relevant information as best you can. Bayes’ theorem is a good way to do that.

(I wish to thank Roger Koppl for helping me with an earlier version of this essay. Any remaining errors, however, are mine, alone.)

Sanford (Sandy) Ikeda is a professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism. He is a member of the FEE Faculty Network.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution 4.0 International License, which requires that credit be given to the author. Read the original article.

How Not To Waste Your Vote: A Mathematical Analysis – Article by Stephen Weese

Stephen Weese
******************************

During this especially contested election, a lot of people are talking about people “wasting” or “throwing away” votes. However, many people who say this do not have a complete grasp of the full mathematical picture – or worse, they are only mentioning the part that supports their position. First let’s define what a “wasted” vote is.

Mathematical Definition of Wasted Votes

A wasted vote is a vote that provides no determination or effect on the final outcome of the election. According to Wikipedia: “Wasted votes are votes cast for losing candidates or votes cast for winning candidates in excess of the number required for victory. For example, in the UK general election of 2005, 52% of votes were cast for losing candidates and 18% were excess votes – a total of 70% wasted votes.”

There are two kinds of wasted votes that mathematically have no effect on the final election:

  1. Votes cast for candidates who did not win
  2. Excess votes cast for winning candidates

Clearly, neither of these kinds of votes statistically affects the election. However, many arguments mention only the first type without mentioning the second. Mathematically and logically, both categories are ineffectual votes.

First Past the Post

The value of your vote is what you give it. Should you spend it on a candidate you don’t believe in?

The United States, along with several other nations, uses the First Past the Post (FPTP) or “winner take all” election. This method is defined as “the candidate who receives more votes than any other candidate wins.”

This is one of the reasons that many people mention wasted votes – our system creates that result. Sociologically speaking, the FPTP system tends to favor a two-party system. The French sociologist Maurice Duverger formulated “Duverger’s Law,” which says just that.

The Electoral College

For U.S. Presidential elections, a state-by-state system called the Electoral College is used. Each state gets a proportional number of electoral votes, which are then used to find a majority for president. Interestingly, what happens in each separate state is a smaller FPTP election, followed by a counting of electoral votes.

The Electoral College is slightly different from a pure FPTP system because it requires an actual number threshold (currently 270 electoral votes) for a candidate to win, instead of a simple plurality of the votes.

We can sum things up as follows:

  1. States hold “winner take all” FPTP elections for electoral votes
  2. Electoral votes are counted
  3. The winner must have 270 electoral votes
  4. If no candidate reaches 270, the House of Representatives chooses the president
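As a rough illustration, the four steps can be sketched as a toy Python function (entirely my own construction: it treats every state as winner-take-all and ignores real-world wrinkles such as Maine’s and Nebraska’s district-level splits):

```python
def electoral_winner(states, house_choice):
    """states: list of (electoral_votes, {candidate: popular_votes}).
    Runs a winner-take-all FPTP contest in each state, tallies
    electoral votes, and requires 270 to win outright."""
    tally = {}
    for ev, popular in states:
        state_winner = max(popular, key=popular.get)           # step 1: FPTP per state
        tally[state_winner] = tally.get(state_winner, 0) + ev  # step 2: count
    leader = max(tally, key=tally.get)
    if tally[leader] >= 270:                                   # step 3: threshold
        return leader
    return house_choice                                        # step 4: House decides

# Toy example with three "states" totaling 538 electoral votes:
result = electoral_winner(
    [(280, {"A": 51, "B": 49}), (158, {"B": 60, "A": 40}), (100, {"B": 55, "A": 45})],
    house_choice="B",
)
print(result)  # "A" wins with 280 electoral votes despite losing two of three states
```

The toy example already shows the distinction that matters below: A wins outright while losing most states, because only the state-level winners and their electoral votes count.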

These distinctions are important, because they can change the math and the concept of the “wasted” vote phenomenon.

Wasted Votes in Presidential Elections

The general concept that is proposed by many is that you must always vote for a Republican or a Democrat because you must stop the worst candidate from winning. In a sense, you are voting a negative vote – against someone – rather than for a candidate of your choice. However, this actually depends on the scenario of the vote. Let’s look at some examples.

Bush vs. Gore: 2000

People voting out of fear of the worst candidate is a self-perpetuating cycle. Let’s examine a common example used in this discussion.

Following the extremely close 2000 U.S. presidential election, some supporters of Democratic candidate Al Gore believe that one reason he lost the election to Republican George W. Bush is because a portion of the electorate (2.7%) voted for Ralph Nader of the Green Party, and exit polls indicated that more of these voters would have preferred Gore (45%) to Bush (27%), with the rest not voting in Nader’s absence.

The argument for this case is even more pronounced because the election was ultimately decided on the basis of the election results in Florida where Bush prevailed over Gore by a margin of only 537 votes (0.009%), which was far exceeded by the number of votes, 97,488 (0.293%), that Nader received. (Wikipedia)

At first, this may look like a clear example of the need to vote for a major party. However, let’s break this situation down mathematically. In every single state election, Bush or Gore won. There were millions of mathematically wasted votes in this election of both types.

In California, Gore won by 1,293,774 votes. Mathematically speaking, there were over one million wasted votes for Gore in this state alone. None of these excess votes could have helped Gore, since he had already mathematically won the state. The California votes didn’t matter in Florida. In fact, the votes in Florida have much more relevance than any other state.

Conclusions: Sometimes a vote for a major party winner is wasted anyway. Sometimes everything will come down to one state. However, there is no way to predict in advance which votes will be this important. If the parties knew that Florida would have been the deal breaker, then they would have acted differently. However, we simply don’t know the future well enough to predict that.

We do know that battleground states are generally more important than “safe” states for each candidate, but it is hard to know exactly which state might matter. (There are plenty of scenarios you can research online about possible electoral outcomes; I encourage you to do so.) This leads us into our next example.

Clinton vs. Trump 2016

Let’s do some math about the state of California and our current presidential election. The average RCP poll has Hillary Clinton ahead by 22.2 percent. The registered voters in California add up to 17.7 million. Not all of them will vote, but we can use the 2012 presidential election as a predictor, where 13.2 million people voted.

Out of those 13.2 million, according to current predictions, 52.6% will vote for Clinton. However, Clinton only needs about 31% to beat Trump. The other 21% of excess votes for Clinton will be wasted. This means that approximately 3 million votes for Clinton in California will be wasted. Now, this is only a mathematical model, but we have several reasons to believe in it.

  1. California has a history of being a heavily Democratic state
  2. Polls usually swing within a single digit margin of error
  3. 21% is quite a large margin of leeway

Even if the polling changes significantly, we are still looking at millions of wasted Clinton votes in California.
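Under these numbers, the wasted-vote estimate works out as follows (a sketch; the 0.1-point cushion standing in for “just enough to finish first” is my own arbitrary choice):

```python
voters = 13_200_000        # projected California turnout, per the 2012 baseline
clinton_share = 0.526      # predicted Clinton share
margin = 0.222             # RCP average lead over Trump

trump_share = clinton_share - margin   # ~30.4%
needed = trump_share + 0.001           # just enough to finish first
wasted_share = clinton_share - needed  # ~21% of the vote is excess

print(round(wasted_share * voters))    # roughly 2.9 million wasted Clinton votes
```

Shrinking the margin by five points (the Stein scenario below) still leaves millions of excess votes, which is the whole point.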

Now let’s throw Jill Stein into the math. As part of the Green Party, she is politically to the left of Hillary, so we will assume that votes for her will be taken from Clinton’s pool. (Though this isn’t always a true assumption, as we will see later.) Right now she is polling at around 4%, but we could even give her 5%. If you take away 5% from Hillary’s margin of 22.2%, that leaves a huge margin of 17.2%: still millions of votes. The takeaway from this: you can safely vote for Jill Stein in California without fear of changing the state election results. Therefore, it will not affect the national vote either.

Since we have the Electoral College, your vote has no influence beyond your state to change other states’ vote counts. Those who prefer Jill Stein can vote for her with a clear conscience, since it will make no difference mathematically. Later we will look at the ethics of voting as it relates to this math.

Mathematical Importance of a Single Vote

There are a few theories on voting power calculations; we will look at two of them here. John F. Banzhaf III created a probabilistic system for determining individual voting power in a block voting system, such as the Electoral College. According to his calculations, because of differences among the states, the system gives different voters different amounts of “voting power.”

A computer science researcher at UNC ran the Banzhaf power numbers for the 1990 U.S. Presidential election and determined that the state of California had the voters with the highest power index: 3.3. This index is measured as a multiple of the weakest voting state, which was Montana (1.0 voting power).
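To make the Banzhaf idea concrete, here is a minimal sketch of the power index for a small weighted voting game. (This is an illustration of the general technique, not the researcher's actual computation, and the weights are hypothetical, not real states.) A voter is "critical" in a winning coalition if removing that voter makes the coalition lose; a voter's Banzhaf power is their share of all such critical memberships:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf power index for a weighted voting game."""
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:                      # winning coalition
                for i in coalition:
                    if total - weights[i] < quota:  # voter i is critical
                        swings[i] += 1
    total_swings = sum(swings)
    return [s / total_swings for s in swings]

# Three hypothetical blocs with weights 4, 3, and 2; majority quota of 5.
print(banzhaf([4, 3, 2], 5))
```

Notably, in this toy game all three blocs come out with equal power (1/3 each) despite unequal weights, which is exactly the kind of counterintuitive result Banzhaf's method surfaces.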

A newer method of measuring voting power was created by a research team from Columbia University using a more empirical (based on existing data) and less randomized model. They concluded that the smaller states had more mathematical voting power because each state receives a minimum of 2 electoral votes as a starting point. This model tends to generate smaller multipliers for voting power but more accurately matches empirical data from past elections.

Using these power ratings as a guide, we can estimate a maximum voting power for each vote. We will be making some assumptions for this calculation.

  1. The minimum voting power multiplier is 1
  2. The highest multiplier from both models will be used as a maximum

Starting numbers

In the United States there are currently 218,959,000 eligible voters with 146,311,000 actual registered voters. In the 2012 Presidential election, 126,144,000 people actually voted. This is our voting pool.

Each vote, legally speaking, has the same weight. So if we start from that assumption, taking into account a probable number of voters (126 million), the power of your vote is:

1 / 126,000,000

This is 0.0000000079, or 0.00000079%. That is the weight of your vote mathematically. Now we can multiply it by the highest power index to show the highest potential of your vote. Our California historical data from 1990 shows a 3.3 index, but to be generous we will round it up to 4. So now the power is 0.00000317%.
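These per-vote weights are easy to verify. The sketch below uses the two assumptions stated above (126 million projected voters, and a power multiplier rounded up to 4):

```python
turnout = 126_000_000            # projected voters, based on 2012 turnout
base_power = 1 / turnout         # equal legal weight per vote
max_multiplier = 4               # generous cap on the state power index
max_power = base_power * max_multiplier

print(f"base weight:    {base_power:.8%}")   # ~0.00000079%
print(f"maximum weight: {max_power:.8%}")    # ~0.00000317%
```

Both printed percentages agree with the figures in the text.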

Using probabilistic equations and analysis, this is the result: this is how powerful your vote is in the U.S. Presidential election if you end up in the most heavily weighted state.

Addressing Weighted Vote Fallacies

As we have seen, many people argue that we should not “waste” votes, yet many millions of votes for the winner are wasted in every election. It is difficult to predict whether a given vote will fall into either wasted category. We’ve also seen past and possible scenarios where voting third party or major party can have no influence on the final election.

Fallacy 1: Treating Single Voters as One Block

A false assumption that people make about voting is treating a single vote as a block. For instance, let’s use our current election again as an example.

Someone insists that if you do not vote for Hillary, then you are helping Trump to be elected. (The reverse of this can also apply here.) You claim that you wish to vote for Gary Johnson, the Libertarian candidate. You’re then told that the current national poll with all parties shows that Johnson is polling at 7%, which is less than the difference between Clinton (39%) and Trump (40%). Therefore, you must vote for Clinton to make up that difference.

There are several problems with this proposal. It does not take each state into consideration. It assumes all Gary Johnson supporters have Clinton as their second choice. And it treats your single vote as the entire 7%.

As we have seen, the current picture in California shows that Clinton has a huge margin. If this voter lived in California, a vote for Gary Johnson would not help Trump and also would not hurt Hillary, even if the entire 7% voted for Johnson. Anyone who says it is your duty to vote negative in this scenario does not know the math of this state.

This also assumes that all Johnson votes would choose Hillary as the second choice, but given that Libertarians take some platform elements from both the Left and the Right, this assumption would be highly unlikely. The same would go for Trump.

When people look at the 7% and tell you that you must vote a certain way, they are assuming you will somehow influence the entire 7%. However, we have seen that you are just one voter, and that your voting power is a very tiny number by itself. You cannot be entirely responsible for a candidate winning or losing with your single vote. In theory, it’s mathematically possible for one vote to decide an election, but with millions of voters the number of possible outcomes is astronomical, and such a scenario is astronomically unlikely, especially if you live in a non-battleground state.

It’s also astronomically unlikely that all 7% (8,820,000 people) would actually vote the way they polled. Even if you give each voter a 99% chance of voting as polled, the probability that all of them do is (0.99) raised to the power of 8,820,000, which is less than 0.000000000000000000000000000000000000000000000000001%
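That number is far too small for ordinary floating point, which would simply underflow to zero, but working in logarithms makes it easy to check. A quick sketch, keeping the text's assumption of a 99% chance per voter:

```python
import math

voters = 8_820_000   # 7% of the ~126 million projected voters
p_each = 0.99        # assumed chance each person votes as polled

# (0.99)**8_820_000 underflows to 0.0 in floating point,
# so compute the base-10 logarithm of the probability instead.
log10_p = voters * math.log10(p_each)
print(f"P(all 8.82M vote as polled) is on the order of 10^{log10_p:,.0f}")
```

The result is on the order of 10^-38,000, which is indeed (vastly) below the bound quoted above.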

Individuals are not entire blocks of voters, and it’s problematic to treat them as such.

Fallacy 2: Third Party Votes Have No Value

On the surface, this might appear to be true. A third-party candidate for President has never won an election, and Duverger’s law states that our first-past-the-post (FPTP) system favors two-party systems. However, it is mathematically possible for a third party to win, and there are also other measurable gains from voting for a third party.

Part of this fallacy is the “winner take all” perspective. In other words, if you don’t win the presidency, you’ve wasted your time.

However, there are many benefits of voting third party, even for president. It makes a political statement to the majority parties. It helps local politicians of that party in elections. It can help change platforms to include third-party elements. And it provides recognition for the party among voters as a viable alternative.

Third party candidates can and have won local and state elections in the past. This is a fact.

In 1968, George Wallace ran as a third party option for President. He received nine million votes and 45 electoral votes. Though he did not expect to win the popular vote, one of his aims was to force the House of Representatives to choose the President by denying either candidate the 270 electoral votes needed to win – and he nearly succeeded. Since our system is not a true First Past the Post, but a hybrid, this kind of situation is possible. In fact, calculations have been done showing that Gary Johnson could in fact force that situation this year. It is very unlikely, but it is possible.

Regardless of his loss, the impact of the Wallace campaign was substantial. He was able to affect the dialogue and events of that election significantly. (This is meant in no way as an endorsement of George Wallace’s political positions.) If his supporters had mostly voted for a majority party, his impact would have been less significant.

In most scenarios given by the “wasted vote” crowd, the only votes considered are those of the current voting electorate. Yet we have seen from the figures previously mentioned that more than 70 million eligible voters are not registered. Even among registered voters, about 20 million didn’t vote in the last election. These potential votes are never placed into the scenario.

The simple truth is, there are millions of uninterested voters out there, yet candidates are not inspiring them to vote. If candidate X or Y were truly worthy of votes, would not some of these voters decide to register? And another question, would it be easier to get a third party voter to choose a majority candidate, or a non-voter? These are not mathematical questions, but they are logical. The fact is, with this many votes at stake, if these non-voters could be encouraged to register, they would undoubtedly change the election as they make up one-third of total eligible voters.

Ethics and Math

It has been demonstrated that the potential individual power of a vote is mathematically very small. It has also been shown that wasted votes can be cast for the winner of an election as well as for the losers, and that it is sometimes hard to predict exactly which votes will be wasted. Given this information, where do we derive the value of a vote?

It’s hard to get it purely from the math or practicality. In fact, it would seem our single vote is of very little import at all. Therefore, we must find meaning and value for our votes outside of the math.

Certainly, the Founders never envisioned an endless cycle of United States citizens voting for the “lesser of two evils,” as the argument is often presented. The idea was for free and open elections where the people’s voice would be heard. It was simple: the candidate who best represented your interests earned your vote.

Your vote is, therefore, an expression of yourself and your beliefs. Your vote has power as a statement. People voting out of fear of the worst candidate is a self-perpetuating cycle. If no one ever has the courage to vote outside of the two main parties, it will never be broken. However, if enough people vote and it shows in the total election count, it will give cause for us to reconsider and embolden even more to vote outside of the two parties.

Yes, our current electoral system has some serious mathematical flaws. It simply does not encourage people to vote for their conscience – but we have seen that things are not as bad as we would be led to believe by some. The true value of a vote is in the people.

The Value of Your Vote

The value of your vote is what you give it. Should you spend it on a candidate you don’t believe in? Should it be an exercise in fear? It’s up to you. It is my hope that these mathematical calculations will bring you freedom from the idea that only majority party votes matter. A vote is a statement, a vote is personal, a vote is an expression of your citizenship in this country. If enough people vote their conscience and vote for what they believe in, things can change.

If you are already a staunch supporter of a major party, then you should vote that way. This paper is not against the major parties at all – but rather against the concept that votes somehow “belong” to only Democrats or Republicans. Votes belong to the voter. There has never been a more important time to vote your conscience.

Stephen Weese

Stephen Weese has an undergraduate degree in Computer Science from George Mason University, and a Masters in Computer Information Technology from Regis University. Stephen teaches college Math and Computer courses. He is also a speaker, a film and voice actor, and a nutrition coach.

This article was originally published on FEE.org. Read the original article.

Publication of “Practice Problems in Advanced Topics in General Insurance” – ACTEX Study Guide by G. Stolyarov II


Practice Problems in Advanced Topics in General Insurance

***

Written by Gennady Stolyarov II, ASA, ACAS, MAAA, CPCU, ARe, ARC, API, AIS, AIE, AIAF

***

Published by ACTEX Publications
***

1st Edition: Spring 2016

 

Students preparing for Society of Actuaries Exam GIADV: Advanced Topics in General Insurance will benefit from Mr. Stolyarov’s latest book, Practice Problems in Advanced Topics in General Insurance. Three options are available for purchase.

Hard-Copy/Electronic Bundle: https://www.actexmadriver.com/product.aspx?id=453107178
Hard Copy: https://www.actexmadriver.com/product.aspx?id=453107176
Electronic: https://www.actexmadriver.com/product.aspx?id=453107177

Comments from the Author: This book of practice problems is the most comprehensive culmination of my efforts to date, and I am pleased to have the opportunity to work with ACTEX Publications to bring all of these resources to candidates in one convenient compilation so that they will spend less time gathering problems from many separate sources. The Spring 2016 edition of this book is approximately 400 pages long and includes 613 practice problems and full solutions. 531 of the problems/solutions are original creations of mine.

This book is structured to align precisely with the five syllabus topics and eight syllabus papers (including the Lee paper, new on the Spring 2016 Exam GIADV syllabus) – each of which has a section of problems devoted to it. The following is a summary breakdown of what you will find:

Problems by Source

Section (and Syllabus Paper)    Original   SOA   CAS   Total
1 (Mack)                              21     5     5      31
2 (Venter)                            22     4     5      31
3 (Clark LDF)                         60     4     6      70
4 (Marshall et al.)                  103     4     4     111
5 (Lee)                               44     0    12      56
6 (Clark Reinsurance)                139     8     9     156
7 (D’Arcy / Dyer)                     99     4     6     109
8 (Mango)                             43     4     2      49
TOTAL                                531    33    49     613

 

Each section presents all of the problems in succession, followed by the solutions at the end. You are encouraged to attempt each problem on your own and write down or type your solution, and then look at the answer key for step-by-step explanation and/or calculations. As this book is a learning tool, I have provided relevant citations from the syllabus readings for many of the practice problems. Also, I am not an advocate of leaving any problems as unexplained “exercises to the reader.” While each of these problems is intended to be an exercise for you, this book’s purpose is to show you how they can be solved as well – so give each of them your best attempt, but know that detailed answers are available for you to check your work and fill in any gaps that may have prevented you from solving a problem yourself.