
What Are the Chances That a Muslim Is a Terrorist? – Article by Sanford Ikeda


The New Renaissance Hat
Sanford Ikeda
******************************
It’s flu season, and for the past two days you’ve had a headache and sore throat. You learn that 90% of people who actually have the flu also have those symptoms, which makes you worry. Does that mean the chance that you have the flu is 90%? In other words, if there’s a 90% chance of having a headache and sore throat given that you have the flu, does that mean there’s a 90% chance of having the flu given that you have a headache and sore throat?

We can use symbols to express this question as follows: Pr(Flu | Symptoms) = Pr(Symptoms | Flu) = 90%?

The answer is no. Why?

If you think about it, you’ll realize that there are other things besides the flu that can give you a combination of a headache and sore throat, such as a cold or an allergy, so having those symptoms is certainly not the same thing as having the flu.  Similarly, while fire produces smoke, the old saying that “where there’s smoke there’s fire” is wrong because it’s quite possible to produce smoke without fire.

Fortunately, there’s a nice way to account for this.

How Bayes’ Theorem Works

Suppose you learn that, in addition to Pr(Symptoms | Flu) = 90%, the probability of a randomly chosen person having a headache and sore throat this season, regardless of the cause, is 10% – i.e., Pr(Symptoms) = 10% – and that only one person in 100 will get the flu this season – i.e., Pr(Flu) = 1%.  How does this information help?

Again, what we want to know is the chance of having the flu given these symptoms, Pr(Flu | Symptoms).  To find that, we first need the probability of having those symptoms if we have the flu (90%) multiplied by the probability of having the flu (1%).  In other words, there’s a 90% chance of having those symptoms if in fact we do have the flu, and the chance of having the flu is only 1%. That means Pr(Symptoms | Flu) x Pr(Flu) = 0.90 x 0.01 = 0.009, or 0.9% – a bit less than one chance in 100.

Finally, we need to divide that result by the probability of having a headache and sore throat regardless of the cause, Pr(Symptoms), which is 10% or 0.10, because we need to know what share of all the headache-and-sore-throat cases occurring this season are actually flu symptoms.

So, putting it all together, the answer to the question, “What is the probability that your Symptoms are caused by the Flu?” is as follows:

Pr(Flu | Symptoms) = [Pr(Symptoms | Flu) x Pr(Flu)] ÷ Pr(Symptoms) = (0.90 x 0.01) ÷ 0.10 = 0.09, or 9%.

So if you have a headache and sore throat there’s only a 9% chance, not 90%, that you have the flu, which I’m sure will come as a relief!
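
For readers who like to see the arithmetic spelled out, here is a minimal sketch of the same calculation in Python (the variable names are my own, not part of the original example):

```python
# Bayes' Theorem: Pr(Flu | Symptoms) = Pr(Symptoms | Flu) * Pr(Flu) / Pr(Symptoms)
p_symptoms_given_flu = 0.90   # Pr(Symptoms | Flu)
p_flu = 0.01                  # Pr(Flu): one person in 100 gets the flu this season
p_symptoms = 0.10             # Pr(Symptoms): headache and sore throat from any cause

p_flu_given_symptoms = p_symptoms_given_flu * p_flu / p_symptoms
print(f"Pr(Flu | Symptoms) = {p_flu_given_symptoms:.0%}")   # 9%
```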

This particular approach to calculating “conditional probabilities” is called Bayes’ Theorem, after Thomas Bayes, the 18th-century Presbyterian minister who came up with it. The example above is one that I got out of this wonderful little book.

Muslims and Terrorism

Now, according to some sources (here and here), 10% of Terrorists are Muslim. Does this mean that there’s a 10% chance that a Muslim person you meet at random is a terrorist?  Again, the answer is emphatically no.

To see why, let’s apply Bayes’ theorem to the question, “What is the probability that a Muslim person is a Terrorist?” Or, stated more formally, “What is the probability that a person is a Terrorist, given that she is a Muslim?” or Pr(Terrorist | Muslim)?

Let’s calculate this the same way we did for the flu using some sources that I Googled and that appeared to be reliable.  I haven’t done a thorough search, however, so I won’t claim my result here to be anything but a ballpark figure.

So I want to find Pr(Terrorist | Muslim), which according to Bayes’ Theorem is equal to…

1) Pr(Muslim | Terrorist):  The probability that a person is a Muslim given that she’s a Terrorist is about 10% according to the sources I cited above, which report that around 90% of Terrorists are Non-Muslims.

Multiplied by…

2) Pr(Terrorist):  The probability that someone in the United States is a Terrorist of any kind. I calculated this by first taking the total number of known terrorist incidents in the U.S. going back to 2000, which I tallied as 121 from this source and as 49 from this source. At the risk of over-stating the incidence of terrorism, I took the higher figure and rounded it to 120.  Next, I multiplied this by 10, under the assumption that on average 10 persons lent material support to each terrorist act (which may be high), and then multiplied that result by 5, under the assumption that only one in five planned attacks is actually carried out (which may be low).  (I made up these multipliers because the data are hard to find; these numbers seem to be at the higher and lower ends of what is likely the case, and I’m trying to make the connection as strong as I can, but I’m certainly willing to entertain evidence showing different numbers.)  This yields 6,000 Terrorists in America between 2000 and 2016, which assumes that no person participated in more than one terrorist attempt (not likely) and that all these persons were active terrorists in the U.S. during those 17 years (not likely), all of which means 6,000 is probably an over-estimate of the number of Terrorists.

If we then divide 6,000 by 300 million people in the U.S. during this period (again, I’ll over-state the probability by not counting tourists and visitors) that gives us a Pr(Terrorist) = 0.00002 or 0.002% or 2 chances out of a hundred-thousand.

Now, divide this by…

3) The probability that someone in the U.S. is a Muslim, which is about 1%.

Putting it all together gives the following:

Pr(Terrorist | Muslim) = [Pr(Muslim | Terrorist) x Pr(Terrorist)] ÷ Pr(Muslim) = 10% x 0.002% ÷ 1% = 0.0002 or 0.02%.

One interpretation of this result is that the probability that a Muslim person, whom you encounter at random in the U.S., is a terrorist is about 1/50th of one-percent. In other words, around one in 5,000 Muslim persons you meet at random is a terrorist.  And keep in mind that the values I chose to make this calculation deliberately over-state, probably by a lot, that probability, so that the probability that a Muslim person is a Terrorist is likely much lower than 0.02%.

Moreover, the probability that a Muslim person is a Terrorist (0.02%) is 500 times lower than the probability that a Terrorist is a Muslim (10%).
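
The same one-line formula reproduces these figures; here is a sketch using the deliberately over-stated inputs described above (again, the variable names are mine):

```python
# Bayes' Theorem with the article's deliberately over-stated estimates
p_muslim_given_terrorist = 0.10      # Pr(Muslim | Terrorist), per the sources cited
p_terrorist = 6_000 / 300_000_000    # Pr(Terrorist) = 0.002%
p_muslim = 0.01                      # Pr(Muslim) in the U.S., about 1%

p_terrorist_given_muslim = p_muslim_given_terrorist * p_terrorist / p_muslim
print(f"Pr(Terrorist | Muslim) = {p_terrorist_given_muslim:.4%}")  # 0.0200%
ratio = p_muslim_given_terrorist / p_terrorist_given_muslim
print(f"Pr(Muslim | Terrorist) is {ratio:.0f} times larger")       # 500 times
```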

(William Easterly of New York University applies Bayes’ theorem to the same question, using estimates that don’t over-state as much as mine do, and calculates the difference not at 500 times but 13,000 times lower!)

Other Considerations

As low as the probability of a Muslim person being a Terrorist is, the same data do indicate that a Non-Muslim person is much less likely to be a Terrorist.  By substituting values where appropriate – Pr(Non-Muslim | Terrorist) = 90% and Pr(Non-Muslim) = 99% – Bayes’ theorem gives us the following:

Pr(Terrorist | Non-Muslim) = [Pr(Non-Muslim | Terrorist) x Pr(Terrorist)] ÷ Pr(Non-Muslim) = 90% x 0.002% ÷ 99% = 0.00002 or 0.002%.

So one interpretation of this is that a randomly chosen Non-Muslim person is around one-tenth as likely to be a Terrorist as a Muslim person (i.e., 0.002% versus 0.02%).  Naturally, the probabilities will be higher or lower if you’re at a terrorist convention or at an anti-terrorist peace rally; and if you have additional data that further differentiate among various groups – such as Wahhabi Sunni Muslims versus Salafist Muslims, or Tamil Buddhists versus Tibetan Buddhists – the results will again be more accurate.

But whether you’re trying to educate yourself about the flu or terrorism, common sense suggests using relevant information as best you can. Bayes’ theorem is a good way to do that.

(I wish to thank Roger Koppl for helping me with an earlier version of this essay. Any remaining errors, however, are mine, alone.)

Sanford (Sandy) Ikeda is a professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism. He is a member of the FEE Faculty Network.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution 4.0 International License, which requires that credit be given to the author. Read the original article.

How Not To Waste Your Vote: A Mathematical Analysis – Article by Stephen Weese


The New Renaissance Hat
Stephen Weese
******************************

During this especially contested election, a lot of people are talking about people “wasting” or “throwing away” votes. However, many people who say this do not have a complete grasp of the full mathematical picture – or worse, they are only mentioning the part that supports their position. First let’s define what a “wasted” vote is.

Mathematical Definition of Wasted Votes

A wasted vote is a vote that provides no determination or effect on the final outcome of the election. According to Wikipedia: “Wasted votes are votes cast for losing candidates or votes cast for winning candidates in excess of the number required for victory. For example, in the UK general election of 2005, 52% of votes were cast for losing candidates and 18% were excess votes – a total of 70% wasted votes.”

There are two kinds of wasted votes that mathematically have no effect on the final election:

  1. Votes cast for candidates who did not win
  2. Excess votes cast for winning candidates

Clearly, neither of these kinds of votes statistically affect the election. However, many arguments only mention the first type without mentioning the second. Mathematically and logically, both categories are ineffectual votes.

First Past the Post

The United States, along with several other nations, uses the First Past the Post (FPTP) or “winner take all” election method, under which “the candidate who receives more votes than any other candidate wins.”

This is one of the reasons that many people mention wasted votes – our system creates that result. Sociologically speaking, the FPTP system tends to favor a two-party system. The French sociologist Maurice Duverger created “Duverger’s Law” which says just that.

The Electoral College

For U.S. Presidential elections, a state-by-state system called the Electoral College is used. Each state gets a number of electoral votes based on the size of its congressional delegation, and those electoral votes are then tallied to determine a majority winner for president. Interestingly, what happens in each separate state is a smaller FPTP election, followed by a counting of electoral votes.

The Electoral College is slightly different from a pure FPTP system because it requires a candidate to reach an actual numerical threshold (currently 270 electoral votes) to win, rather than merely receiving more votes than any other candidate.

We can sum things up as follows:

  1. States hold “winner take all” FPTP elections for electoral votes
  2. Electoral votes are counted
  3. The winner must have 270 electoral votes
  4. If there is no candidate that reaches it, the House of Representatives chooses the president

These distinctions are important, because they can change the math and the concept of the “wasted” vote phenomenon.

Wasted Votes in Presidential Elections

The general concept that is proposed by many is that you must always vote for a Republican or a Democrat because you must stop the worst candidate from winning. In a sense, you are voting a negative vote – against someone – rather than for a candidate of your choice. However, this actually depends on the scenario of the vote. Let’s look at some examples.

Bush vs. Gore: 2000

Let’s examine a common example used in this discussion.

Following the extremely close 2000 U.S. presidential election, some supporters of Democratic candidate Al Gore believe that one reason he lost the election to Republican George W. Bush is because a portion of the electorate (2.7%) voted for Ralph Nader of the Green Party, and exit polls indicated that more of these voters would have preferred Gore (45%) to Bush (27%), with the rest not voting in Nader’s absence.

The argument for this case is even more pronounced because the election was ultimately decided on the basis of the election results in Florida where Bush prevailed over Gore by a margin of only 537 votes (0.009%), which was far exceeded by the number of votes, 97,488 (0.293%), that Nader received. (Wikipedia)

At first, this may look like a clear example of the need to vote for a major party. However, let’s break this situation down mathematically. In every single state election, Bush or Gore won. There were millions of mathematically wasted votes in this election of both types.

In California, Gore won by 1,293,774 votes. Mathematically speaking, there were over one million wasted votes for Gore in this state alone. None of these excess votes could have helped Gore, since he had already mathematically won the state. The California votes didn’t matter in Florida. In fact, the votes in Florida had far more relevance than those in any other state.

Conclusions: Sometimes a vote for a major-party winner is wasted anyway. Sometimes everything will come down to one state. However, there is no way to predict in advance which votes will be this important. If the parties had known that Florida would be the deciding state, they would have acted differently. However, we simply don’t know the future well enough to predict that.

We do know that battleground states are generally more important than “safe” states for each candidate, but it is hard to know exactly which state might matter. (There are plenty of scenarios about possible electoral outcomes that you can research online; I encourage you to do so.) This leads us into our next example.

Clinton vs. Trump 2016

Let’s do some math about the state of California and our current presidential election. The average RCP poll has Hillary Clinton ahead by 22.2 percentage points. The registered voters in California add up to 17.7 million. Not all of them will vote, but we can use the 2012 presidential election as a predictor, in which 13.2 million people voted.

Out of those 13.2 million, according to current predictions, 52.6% will vote for Clinton. However, Clinton only needs about 31% to beat Trump, so the remaining 21 points or so are excess votes. This means that approximately 3 million votes for Clinton in California will be wasted. Now, this is only a mathematical model, but we have several reasons to believe in it.

  1. California has a history of being a heavily Democratic state
  2. Polls usually swing within a single digit margin of error
  3. 21% is quite a large margin of leeway

Even if the polling changes significantly, we are still looking at millions of wasted Clinton votes in California.

Now let’s throw Jill Stein into the math. As the Green Party candidate, she is politically to the left of Hillary, so we will assume that votes for her are taken from Clinton’s pool. (Though this isn’t always a true assumption, as we will see later.) Right now she is polling at around 4%, but we could even give her 5%. If you take away 5 points from Hillary’s margin of 22.2%, that still leaves a huge margin of 17.2%: still millions of votes. The takeaway from this: you can safely vote for Jill Stein in California without fear of changing the state election results. Therefore, it will not affect the national vote either.
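
Here is a back-of-the-envelope sketch of that California arithmetic in Python; all of the inputs are the estimates quoted above, and the “about 31%” threshold follows the text rather than an exact calculation:

```python
# Rough model of the California example (inputs are the article's estimates)
votes_cast = 13_200_000     # projected turnout, using 2012 as a guide
clinton_share = 0.526       # projected Clinton share of the vote
margin = 0.222              # RCP average Clinton lead over Trump
needed_to_win = 0.31        # "about 31%": just above Trump's projected share

excess_share = clinton_share - needed_to_win                        # roughly 21 points of excess
print(f"Wasted Clinton votes: {excess_share * votes_cast:,.0f}")    # ~2.9 million
print(f"Margin after a 5% Stein vote: {margin - 0.05:.1%}")         # 17.2%
```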

Since we have the Electoral College, your votes will have no influence beyond the state to change other vote counts. Those who prefer Jill Stein can with a clear conscience vote for her, since it will make no difference mathematically. Later we will look at the ethics of voting as it relates to this math.

Mathematical Importance of a Single Vote

There are a few theories on voting power calculations; we will look at two of them here. John F. Banzhaf III created a probabilistic system for determining individual voting power in a block voting system, such as the Electoral College. According to his calculations, because of differences in each state, it gives different voters different amounts of “voting power.”

A computer science researcher at UNC ran the Banzhaf power numbers for the 1990 U.S. Presidential election and determined that the state of California had the voters with the highest power index: 3.3. This index is measured as a multiple of the weakest voting state, which was Montana (1.0 voting power).

A newer method of measuring voting power was created by a research team from Columbia University using a more empirical (based on existing data) and less randomized model. They concluded that the smaller states had more mathematical voting power due to the fact that they received 2 votes minimum as a starting point. This model tends to generate smaller multipliers for voting power but more accurately matches empirical data from past elections.

Using these power ratings as a guide, we can estimate a maximum voting power for each vote. We will make some assumptions for this calculation.

  1. The minimum voting power multiplier is 1
  2. The highest multiplier from both models will be used as a maximum

Starting numbers

In the United States there are currently 218,959,000 eligible voters with 146,311,000 actual registered voters. In the 2012 Presidential election, 126,144,000 people actually voted. This is our voting pool.

Each vote, legally speaking, has the same weight. So if we start from that assumption, taking into account a probable amount of voters (126 million), the power of your vote is:

1 / 126,000,000

This is 0.0000000079, or 0.00000079%. That is the weight of your vote mathematically. Now we can multiply it by the highest power index to show the highest potential of your vote. Our California historical data from 1990 shows a 3.3 index, but to be conservative we will raise it to 4. So now the power is 0.00000317%.

Using probabilistic equations and analysis, this is the result. This is how powerful your vote is in the U.S. Presidential election if you end up in the most heavily weighted state.
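
The same figures can be reproduced in a couple of lines; a sketch, with the conservative multiplier of 4 taken from the discussion above:

```python
# Maximum individual voting power, per the assumptions above
expected_voters = 126_000_000    # approximate turnout, based on 2012
max_multiplier = 4               # highest Banzhaf-style index, rounded up from 3.3

base_power = 1 / expected_voters
print(f"Base weight of one vote: {base_power:.10f} ({base_power:.8%})")   # 0.0000000079 (0.00000079%)
print(f"Maximum weighted power:  {base_power * max_multiplier:.8%}")      # 0.00000317%
```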

Addressing Weighted Vote Fallacies

As we have seen, many people argue that we should not “waste” votes, yet many millions of votes for the winner are wasted every year. It is difficult to predict whether a vote will end up in either wasted category. We’ve also seen past and possible scenarios where voting third party or major party can have no influence on the final election.

Fallacy 1: Treating Single Voters as One Block

A false assumption that people make about voting is treating a single vote as a block. For instance, let’s use our current election again as an example.

Someone insists that if you do not vote for Hillary, then you are helping Trump to be elected. (The reverse of this can also apply here.) You claim that you wish to vote for Gary Johnson, the Libertarian candidate. You’re then told that the current national poll with all parties shows that Johnson is polling at 7%, which is less than the difference between Clinton (39%) and Trump (40%). Therefore, you must vote for Clinton to make up that difference.

There are several problems with this proposal. It does not take each state into consideration. It assumes all Gary Johnson supporters have Clinton as their second choice. And it treats your single vote as the entire 7%.

As we have seen, the current picture in California shows that Clinton has a huge margin. If this voter lived in California, a vote for Gary Johnson would not help Trump and also would not hurt Hillary, even if the entire 7% voted for Johnson. Anyone who says it is your duty to vote negative in this scenario does not know the math of this state.

This also assumes that all Johnson votes would choose Hillary as the second choice, but given that Libertarians take some platform elements from both the Left and the Right, this assumption would be highly unlikely. The same would go for Trump.

When people look at the 7% and tell you that you must vote a certain way, it is assuming you will somehow influence the entire 7%. However, we have seen that you are just one voter, and that your voting power is a very tiny number by itself. You cannot be entirely responsible for a candidate winning or losing with your single vote. In theory, it’s mathematically possible for one vote to decide an election, but given there are an exponential number of possible scenarios with millions of voters (imagine raising a few million to an exponent), it’s astronomically unlikely, especially if you live in a non-battleground state.

It’s also astronomically unlikely that all 7% (8,820,000 people) would vote for who they polled for. Even if you gave each voter a 99% chance of voting for who they polled for, the chance that all of them would vote the way they polled is (0.99) to the power of 8,820,000, which is less than 0.000000000000000000000000000000000000000000000000001%
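
Computing 0.99 raised to the 8,820,000th power directly underflows ordinary floating-point arithmetic, so a quick sketch like the following works with logarithms instead (the voter count is simply the 7% figure from the example):

```python
import math

voters = 8_820_000   # the 7% block of voters from the example above
p_each = 0.99        # chance that each voter votes the way they polled

# log10 of the probability that every single one of them votes as polled
log10_p = voters * math.log10(p_each)
print(f"Probability of all voting as polled: about 10^{log10_p:,.0f}")  # roughly 10^-38,500
```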

Individuals are not entire blocks of voters, and it’s problematic to treat them as such.

Fallacy 2: Third Party Votes Have No Value

On the surface, this might appear to be true. A third-party candidate for President has never won an election. We also have Duverger’s Law, which states that our FPTP system favors two-party systems. However, it is mathematically possible for a third party to win, and there are also other measurable gains from voting for a third party.

Part of this fallacy is the “winner take all” perspective. In other words, if you don’t win the presidency, you’ve wasted your time.

However, there are many benefits of voting third party, even for president. It makes a political statement to the majority parties. It helps local politicians of that party in elections. It can help change platforms to include third-party elements. And it provides recognition for the party among voters as a viable alternative.

Third party candidates can and have won local and state elections in the past. This is a fact.

In 1968, George Wallace ran as a third-party option for President. He received nine million votes and 45 electoral votes. Though he did not expect to win the popular vote, one of his aims was to force the House of Representatives to choose the President by denying either candidate the 270 electoral votes needed to win – and he nearly succeeded. Since our system is not a true First Past the Post system but a hybrid, this kind of situation is possible. Calculations have been done showing that Gary Johnson could in fact force that situation this year. It is very unlikely, but it is possible.

Regardless of his loss, the impact of the Wallace campaign was substantial. He was able to affect the dialogue and events of that election significantly. (This is meant in no way as an endorsement of George Wallace’s political positions.) If his supporters had mostly voted for a majority party, his impact would have been less significant.

In most scenarios given by the “wasted” vote crowd, all of the votes that are considered are ones from the current voting electorate. Yet we have seen from the figures previously mentioned that over 50 million eligible voters are not registered. Even among registered voters, about 20 million didn’t vote in the last election. These potential votes are never placed into the scenario.

The simple truth is, there are millions of uninterested voters out there, yet candidates are not inspiring them to vote. If candidate X or Y were truly worthy of votes, would not some of these voters decide to register? And another question, would it be easier to get a third party voter to choose a majority candidate, or a non-voter? These are not mathematical questions, but they are logical. The fact is, with this many votes at stake, if these non-voters could be encouraged to register, they would undoubtedly change the election as they make up one-third of total eligible voters.

Ethics and Math

It has been demonstrated that the potential individual power of a vote is mathematically very small. It has also been shown that wasted votes can be cast for the winner of an election as well as for the losers, and that it is sometimes hard to predict exactly which votes will be wasted. Given this information, where do we derive the value of a vote?

It’s hard to get it purely from the math or practicality. In fact, it would seem our single vote is of very little import at all. Therefore, we must find meaning and value for our votes outside of the math.

Certainly, the Founders never envisioned an endless cycle of United States citizens voting for the “lesser of two evils,” as the argument is often presented. The idea was for free and open elections where the people’s voice would be heard. It was simple: the candidate who best represented your interests earned your vote.

Your vote is, therefore, an expression of yourself and your beliefs. Your vote has power as a statement. People voting out of fear of the worst candidate is a self-perpetuating cycle. If no one ever has the courage to vote outside of the two main parties, it will never be broken. However, if enough people vote and it shows in the total election count, it will give cause for us to reconsider and embolden even more to vote outside of the two parties.

Yes, our current electoral system has some serious mathematical flaws. It simply does not encourage people to vote for their conscience – but we have seen that things are not as bad as we would be led to believe by some. The true value of a vote is in the people.

The Value of Your Vote

The value of your vote is what you give it. Should you spend it on a candidate you don’t believe in? Should it be an exercise in fear? It’s up to you. It is my hope that these mathematical calculations will bring you freedom from the idea that only majority party votes matter. A vote is a statement, a vote is personal, a vote is an expression of your citizenship in this country. If enough people vote their conscience and vote for what they believe in, things can change.

If you are already a staunch supporter of a major party, then you should vote that way. This paper is not against the major parties at all – but rather against the concept that votes somehow “belong” to only Democrats or Republicans. Votes belong to the voter. There has never been a more important time to vote your conscience.

Stephen Weese

Stephen Weese has an undergraduate degree in Computer Science from George Mason University, and a Masters in Computer Information Technology from Regis University. Stephen teaches college Math and Computer courses. He is also a speaker, a film and voice actor, and a nutrition coach.

This article was originally published on FEE.org. Read the original article.

Ontological Realism and Creating the One Real Future – Video by G. Stolyarov II


The New Renaissance Hat
G. Stolyarov II
August 23, 2014
******************************

An ongoing debate in ontology concerns the question of whether ideas or the physical reality have primacy. Mr. Stolyarov addresses the implications of the primacy of the physical reality for human agency in the pursuit of life and individual flourishing. Transhumanism and life extension are in particular greatly aided by an ontological realist (and physicalist) framework of thought.

References

– “Ontological Realism and Creating the One Real Future” – Essay by G. Stolyarov II
– “Objective Reality” – Video by David Kelley
– A Rational Cosmology – Treatise by G. Stolyarov II
– “Putting Randomness in Its Place” – Essay by G. Stolyarov II
– “Putting Randomness in Its Place” – Video by G. Stolyarov II

Ontological Realism and Creating the One Real Future – Article by G. Stolyarov II


The New Renaissance Hat
G. Stolyarov II
August 13, 2014
******************************

An ongoing debate in ontology concerns the question of whether ideas or the physical reality have primacy. In my view, the physical reality is clearly ontologically primary, because it makes possible the thinking and idea-generation which exist only as very sophisticated emergent processes depending on multiple levels of physical structures (atoms, cells, tissues, organs, organisms of sufficient complexity – and then a sufficiently rich history of sensory experience to make the formation of interesting ideas supportable).

One of my favorite contemporary philosophers is David Kelley – an Objectivist but one very open to philosophical innovation – without the dogmatic taint that characterized the later years of Ayn Rand and some of her followers today. He has recently released a video entitled “Objective Reality”, where he discusses the idea of the primacy of existence over consciousness. Here, I seek to address the primacy of the physical reality in its connection with several additional considerations – the concepts of essences and qualia, as well as the implications of the primacy of the physical reality for human agency in the pursuit of life and individual flourishing.

Essences

Some ontological idealists – proponents of the primacy of ideas – will claim that the essence of an entity exists outside of that entity, in a separate realm of “immaterial” ideas akin to Plato’s forms. On the contrary, regarding essences I am of an Aristotelian persuasion: the essence of a thing is part of that very thing; it is the sum of the qualities of an entity without which that entity could not have been what it is. Essences do not exist apart from any thing – rather, any thing of a particular sort that exists has the essence which defines it as that thing, along with perhaps some other incidental qualities which are not constitutive of its being that thing.

For instance, a chair may be painted blue or green or any other color, and it may have three legs instead of four, and it may have some dents in it – but it would still be a chair. But if all chairs were destroyed, and no one remembered what a chair was, there would be no ideal Platonic form of the chair floating out there somewhere. In that sense, I differ from the idealists’ characterization of essences as “immaterial”. Rather, an essence always characterizes a material entity or process performed by material entities.

Qualia

Qualia are an individual’s subjective, conscious experiences of reality – for instance, how an individual perceives the color red or the sound of a note played on an instrument. But qualia, too, have a material grounding. As a physicalist, I understand qualia to be the result of physical processes within the body and brain that generate certain sensory perceptions of the world. It follows that different qualia can only be generated if one’s organism has different physical components.

A bat, a fly, or a whale would certainly experience the same external reality differently from a human. Most humans (the ones whose sense organs are not damaged or characterized by genetic defects) have the same essential perceptual structures and so, if placed within the exact same vantage point relative to an object, would perceive it in the same way (with regard to what appears before their senses). After that, of course, what they choose to focus on with their minds and how they choose to interpret what they see (in terms of opinions, associations, decisions regarding what to do next) could differ greatly. The physical perception is objective, but the interpretation of that perception is subjective. But by emulating the sensory organs of another organism (even a bat or a fly), it should be possible to perceive what that organism perceives. I delve into this principle in some detail in Chapter XII of A Rational Cosmology: “The Objectivity of Consciousness”.

Importance of Ontological Realism to Life, Flourishing, and Human Agency

Some opponents of ontological realism might classify it as a “naïve” perspective and claim that those who see physical reality as primary are inappropriately assigning it “ontological privilege”. On the contrary, I strongly hold that this world is the one real world and that, certainly, events that happen in this world are ontologically privileged for having happened – as opposed to the uncountably many possibilities for what might have happened but did not. Moreover, I see this recognition as an essential starting point for the endeavor which is really at the heart of individual liberty, life extension, transhumanism, and, more generally, a consistent vision of humanism and morality: the preservation of the individual – of all individuals who have not committed irreparable wrongs – from physical demise.

I am not an adherent of the “many worlds” interpretation of quantum mechanics, which some may posit in opposition to my view of the primacy of the single physical reality which we directly experience and inhabit. Indeed, to me, it does not appear that quantum mechanics has a valid philosophical interpretation at all (at least not until some extremely rational and patient philosopher delves into it and tries to puzzle it out); rather, it is a set of equations that is reasonably predictive of the behavior of subatomic particles (sometimes) through a series of probabilistic models. Perhaps in part due to my work in another highly probability-driven area – actuarial science – my experience informs me that probabilistic models are at best only useful approximations of phenomena that may not yet be accessible to us in other ways, and a substantial fraction of the time the models are wildly wrong anyway. As for the very concept of randomness itself, it is a useful epistemological idea, but not a valid metaphysical one, as I explain in my essay “Putting Randomness in Its Place“.

In my view, the past is irreversible, and it happened in the one particular way it happened. The future is full of potential, because it has not happened yet, and the emergent property of human volition enables it to happen in a multitude of ways, depending on the paths we choose. In a poetic sense, it could be said that many worlds unfold before us, but with every passing moment, we pick one of them and that world becomes the one irreversibly, while the others are not retained anywhere. Not only is this understanding a necessary prerequisite for the concept of moral responsibility (our actions have consequences in bringing about certain outcomes, for which we can be credited or faulted, rewarded or punished), but it is also necessary as a foundation for the life-extension premise itself.

If there were infinitely many possible universes, where each of us could have died or not died at every possible instant, then in some of those hypothetical universes, we would have all already been beneficiaries of indefinite life extension. Imagine a universe where humanity was lucky and avoided all of the wars, tyrannies, epidemics, and superstitions that plagued our history and, as a result, was able to progress so rapidly that indefinite longevity would have been already known to the ancient Greeks! This would make for fascinating fiction, and I readily admit to enjoying the occasional retrospective “What if?” contemplation – e.g., what if the Jacobins had not taken over during the French Revolution, or what if Otto von Bismarck had never come to power in Germany, or what if the attacks of September 11, 2001 (a major setback for human progress, largely due to the reactionary violation of civil liberties by Western governments) had never happened? Unfortunately, from an ontological perspective, I do not have that luxury of rewriting the past.  As for the future, it can only be written through actions that affect the physical world, but any tools we can create to help us do this would be welcome.

This is certainly not the best of all possible worlds (a point amply demonstrated in one of my favorite works, Voltaire’s Candide), but it is the world we find ourselves in, through a variety of historical accidents, path-dependencies, and our own prior choices and their foreseen and unforeseen repercussions. But this is indeed our starting point when it comes to any future action, and the choice each of us ultimately faces is whether (i) to become a passive victim of the “larger forces” in this world (to conform or “adapt”, as many people like to call it), (ii) to create an alternate world using imagination and subjective experience only, or (iii) to physically alter this world to fit the parameters of a more just, happy, safe, and prosperous existence – a task to which only we are suited (since there is no cosmic justice or higher power). It should be clear by now that I strongly favor the third option. We should, through our physical deeds, harness the laws of nature to create the world we would wish to inhabit.

Putting Randomness in Its Place (2010) – Article by G. Stolyarov II


The New Renaissance Hat
G. Stolyarov II
Originally Published February 11, 2010
as Part of Issue CCXXXV of The Rational Argumentator
Republished July 22, 2014
******************************
Note from the Author: This essay was originally published as part of Issue CCXXXV of The Rational Argumentator on February 11, 2010, using the Yahoo! Voices publishing platform. Because of the imminent closure of Yahoo! Voices, the essay is now being made directly available on The Rational Argumentator.
~ G. Stolyarov II, July 22, 2014
***

A widespread misunderstanding of the meaning of the term “randomness” often results in false generalizations made regarding reality. In particular, the view of randomness as metaphysical, rather than epistemological, is responsible for numerous commonplace fallacies.

To see randomness as metaphysical is to see it as an inherent aspect of reality as such – as embedded inextricably in “the way things are.” Typically, people holding this view will take it in one of two directions. Some of them will see randomness pejoratively – thinking that there is no way reality could be like that: chaotic, undefined, unpredictable. Such individuals will typically posit that, because reality cannot be random, it must therefore be centrally planned by a super-intelligent entity, such as a deity.

Others, however, will use the metaphysical perception of randomness to deny evident and ubiquitously observable truths about our world: the facts that all entities obey certain natural laws, that these laws are accessible to human beings, and that they can inform our decision-making and actions. These individuals typically espouse metaphysical subjectivism – the idea that the nature of reality depends on the person observing it, or that all of existence is in such a chaotic flux that we cannot ever possibly make sense of it, so we might as well “construct” our own personal or cultural “reality.”

But it is the very metaphysical perception of randomness that is in error. Randomness is, rather, epistemological – a description of our state of knowledge of external reality, and not of external reality itself. To say that a phenomenon is random simply means that we do not (yet) have adequate knowledge to be able to explain it causally. Based on past observational experience or some knowledge of aspects inherent to that phenomenon, we might be able to assign probabilities – estimates of the likelihood that a particular event will occur, in the absence of more detailed knowledge about the specifics of the circumstances that might give rise to that event. In some areas of life, this is presently as far as humans can venture. Indeed, probabilistic thinking can be conceptually quite powerful – although imprecise – in analyzing large classes of phenomena which, individually, exhibit too many specific details for any single mind to grasp. Entire industries, such as insurance and investment, are founded on this premise. But we must not mistake a conceptual tool for an external fact; the probabilities are not “out there.” They are, rather, an attempt by human beings to interpret and anticipate external phenomena.

The recognition of randomness as epistemological can be of great aid both to those who believe in biological evolution and to advocates of the free market. Neither the laws of evolution, nor the laws of economics, of course, would fit any definition of “randomness.” Rather, they are impersonal, abstract principles that definitively describe the general outcomes of particular highly complex sets of interactions. They are unable to account for every fact of those interactions, however, and they are also not always able to predict precisely how or when the general outcome they anticipate will ensue. For instance, biological evolution cannot precisely predict which complex life forms will evolve and at what times, or which animals in a current ecosystem will ultimately proliferate, although traits that might enhance an animal’s survival and reproduction and traits that might hinder them can be identified. Likewise, economics – despite the protestations of some economists to the contrary – cannot predict the movements of stock prices or prices in general, although particular directional effects on prices from known technological breakthroughs or policy decisions can be anticipated.

Evolution is often accused of being incapable of producing intelligent life and speciation because of its “randomness.” For many advocates of “intelligent design,” it does not appear feasible that the complexity of life today could have arisen as a result of “chance” occurrences – such as genetic mutations – that nobody planned and for whose outcomes nobody vouched. However, each of these mutations – and the natural selection pressures to which they were subject – can only be described as random to the extent that we cannot precisely describe the circumstances under which they occurred. The more knowledge we have of the circumstances surrounding a particular mutation, the more it becomes perfectly sensible to us, and explicable as a product of causal, natural laws, not “sheer chance.” Such natural laws work both at the microscopic, molecular level where the proximate cause of the mutation occurred, and at the macroscopic, species-wide level, where organisms with the mutation interact with other organisms and with the inanimate environment to bring about a certain episode in the history of life.

So it is with economics; the interactions of the free market seem chaotic and unpredictable to many – who therefore disparage them as “random” and agitate for centralized power over all aspects of human life. But, in fact, the free market consists of millions of human actors in billions of situations, and each actor has definite purposes and motivations, as well as definite constraints against which he or she must make decisions. The “randomness” of behaviors on the market is only perceived because of the observer’s limited knowledge of the billions of circumstances that generate such behaviors. We can fathom our own lives and immediate environments, and it may become easier to understand the general principles behind complex economies when we recognize that each individual life has its own purposes and orders, although they may be orders which we find mistaken or purposes of which we disapprove. But the interaction of these individual microcosms is the free market; the more we understand about it, the more sensible it becomes to us, and the more valid conclusions we can draw regarding it.

The reason why evolution and economies cannot be predicted at a concrete level, although they can be understood, is the sheer complexity of the events and interactions involved – with each event or interaction possibly being of immense significance. Qualitative generalizations, analyses of attributes, and probabilistic thinking can answer some questions pertaining to these complex systems and can enable us to navigate them with some success. But these comprise our arsenal of tools for interpreting reality; they do not even begin to approach being the reality itself.

When we come to see randomness as a product of our limited knowledge, rather than of reality per se, we can begin to appreciate how much there is about reality that can be understood – rather than dismissed as impossible or inherently chaotic – and can broaden our knowledge and mastery of phenomena we might otherwise have seen as beyond our grasp.

Click here to read more articles in Issue CCXXXV of The Rational Argumentator.

Ideas in Mathematics and Probability: Conditional Probabilities and Bayes’ Theorem (2007) – Article by G. Stolyarov II


The New Renaissance Hat
G. Stolyarov II
July 18, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 2,100 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 18, 2014
***
When analyzing dependent events, the concept of conditional probability becomes a useful tool. The conditional probability of A given B is the probability that event A will occur, given that event B has occurred. In mathematical notation, the probability of A given B is expressed as P(A|B).
***

Bayes’ Theorem enables us to determine the probability of both of two dependent events occurring when we know the conditional probability of one of the events occurring, given that the other has occurred. Bayes’ Theorem states that the probability of A and B occurring is equal to the product of the probability of B and the conditional probability of A given B, or the product of the probability of A and the conditional probability of B given A:

P(A and B) = P(B)*P(A|B) = P(A)*P(B|A).

This theorem works for both independent and dependent events, but for independent events, the result is equivalent to what is given by the multiplication rule: P(A and B) = P(B)*P(A). Why is this the case? When two events are independent, the occurrence of one has no effect on whether the other will occur, so the probability of the event taking place should be equal to the conditional probability of that event given that the other event has taken place. So for independent events A and B: P(A) = P(A|B) and P(B) = P(B|A). If one ever wishes to determine whether two events are independent, it is possible to do so by computing their individual probabilities and their conditional probabilities and seeing if the former equal the latter.

The following sample problem can illustrate the kinds of probability questions that Bayes’ Theorem can be used to answer. This particular problem is of my own invention, but the first actuarial exam (Exam P) has been known to have other problems of this sort, which are virtually identical in format.

Problem: A company has four kinds of machines: A, B, C, and D. The probabilities that a machine of a given type will fail on a certain day are: 0.02 for A, 0.03 for B, 0.05 for C, and 0.15 for D. 10% of a company’s machines are of type A, 25% are of type B, 30% are of type C, and 35% are of type D. Given that a machine has failed on a certain day, what is the probability of the machine being of type B?

Solution: First, let us designate the event of a machine’s failure with the letter F. Thus, from the given information in the problem, P(A) = 0.10, P(B) = 0.25, P(C) = 0.3, and P(D) = 0.35. P(F|A) = 0.02, P(F|B) = 0.03, P(F|C) = 0.05, and P(F|D) = 0.15. We want to find P(B|F). By Bayes’ Theorem, P(B and F) = P(F)* P(B|F). We can transform this to

P(B|F) = P(B and F)/P(F). To solve this, we must determine P(B and F). By another application of Bayes’ Theorem, P(B and F) = P(B)* P(F|B) = 0.25*0.03 = 0.0075. Furthermore,

P(F) = P(A and F) + P(B and F) + P(C and F) + P(D and F)

P(F) = P(A)*P(F|A) + P(B)* P(F|B) + P(C)* P(F|C) + P(D)* P(F|D)

P(F) = 0.10*0.02 + 0.25*0.03 + 0.3*0.05 + 0.35*0.15 = 0.077.

So P(B|F) = P(B and F)/P(F) = 0.0075/0.077 = 15/154, or about 0.0974025974. Thus, if a machine has failed on a certain day, the probability that it is of type B is 15/154.
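
The same answer can be checked numerically; here is a short sketch (the dictionaries are just one convenient way to organize the given probabilities):

```python
# Machine-failure problem: find P(B | F) via the law of total probability and Bayes' Theorem
p_type = {"A": 0.10, "B": 0.25, "C": 0.30, "D": 0.35}        # P(type)
p_fail_given = {"A": 0.02, "B": 0.03, "C": 0.05, "D": 0.15}  # P(F | type)

p_fail = sum(p_type[t] * p_fail_given[t] for t in p_type)    # P(F)
p_b_given_fail = p_type["B"] * p_fail_given["B"] / p_fail    # P(B | F)

print(f"P(F) = {p_fail:.3f}")                 # 0.077
print(f"P(B | F) = {p_b_given_fail:.10f}")    # 0.0974025974 (= 15/154)
```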

Ideas in Mathematics and Probability: Independent Events and Dependent Events (2007) – Article by G. Stolyarov II


The New Renaissance Hat
G. Stolyarov II
July 18, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 3,300 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 18, 2014
***
This essay discusses independent and dependent events and their role in probability theory and analyses.
***

Let us consider two events, A and B. If the probability that event A occurs has no effect on the probability that event B occurs, then A and B are independent events. A classic example of independent events is two tosses of the same fair coin. If a coin lands heads once, this has no influence on whether it will land heads again. The probability of landing heads on any given toss of the fair coin is ½.

It is a common error to presume that once a coin has landed heads for a number of times, this increases its probability of landing tails the next time it is tossed. If each toss is an independent event, this cannot be the case. Even if the coin has landed heads for 1000 consecutive times previously, its probability of landing heads the next time it is tossed is ½.

With two dependent events, on the other hand, the outcome of the first event affects the probability of the second. A classic example of such events would be drawing cards from a deck without replacement. A standard 52-card deck contains 4 aces. On the first draw, the probability of choosing an ace is 4/52 or 1/13. However, the probability of choosing an ace on the second draw will depend on whether an ace was selected on the first draw.

If an ace was selected on the first draw, there are 51 cards left to choose from, 3 of which are aces. So the probability of selecting an ace on the second draw is 3/51. But if an ace was not selected on the first draw, there are 4 aces left among 51 cards, so the probability of selecting an ace on the second draw is 4/51. Clearly, then, multiple drawings of cards from a deck without replacement are dependent events.
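
As a rough check on these conditional probabilities, here is a short simulation; the deck representation and the number of trials are arbitrary choices for illustration:

```python
import random

# Simulate two draws without replacement from a 52-card deck containing 4 aces
deck = ["ace"] * 4 + ["other"] * 48
trials = 200_000
ace_then_ace = first_ace = 0
other_then_ace = first_other = 0

for _ in range(trials):
    first, second = random.sample(deck, 2)   # two cards drawn without replacement
    if first == "ace":
        first_ace += 1
        ace_then_ace += (second == "ace")
    else:
        first_other += 1
        other_then_ace += (second == "ace")

print(f"P(ace 2nd | ace 1st)   ~ {ace_then_ace / first_ace:.4f}  (exact 3/51 = {3/51:.4f})")
print(f"P(ace 2nd | other 1st) ~ {other_then_ace / first_other:.4f}  (exact 4/51 = {4/51:.4f})")
```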

With any number of independent events, it is possible to use the multiplication rule to know the probability of some number of these events occurring. For example, if A, B, and C are independent events, and P(A) — the probability of A — is 1/3, P(B) is 3/5, and P(C) is 4/11, then the probability that A and B will occur is P(A)*P(B) = (1/3)(3/5) = 1/5. The probability that A, B, and C will occur is P(A)*P(B)*P(C) = (1/3)(3/5)(4/11) = 4/55.

It is important to only use the multiplication rule for independent events. With dependent events, the computation of probabilities for multiple events is not so straightforward and depends on the specific situation and dependence relationship among events. But further explorations of the world of probability theory will acquaint one with methods of analyzing probabilities of multiple dependent events as well.

Ideas in Mathematics and Probability: The Uniform Distribution (2007) – Article by G. Stolyarov II


The New Renaissance Hat
G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 4,800 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 17, 2014
***

The uniform distribution is alternately known as the de Moivre distribution, in honor of the French mathematician Abraham de Moivre (1667-1754) who introduced it to probability theory. The fundamental assumption behind the uniform distribution is that none of the possible outcomes is more or less likely than any other. The uniform distribution applies to continuous random variables, i.e., variables that can assume any values within a specified range.

Let us say that a given random variable X is uniformly distributed over the interval from a to b. That is, the smallest value X can assume is a and the largest value it can assume is b. To determine the probability density function (pdf) of such a random variable, we need only remember that the total area under the graph of the pdf must equal 1. Since the pdf is constant throughout the interval on which X can assume values, the area underneath its graph is that of a rectangle — which can be determined by multiplying its base by its height. But we know the base of the rectangle to be (b-a), the width of the interval over which the random variable is distributed, and its area to be 1. Thus, the height of the rectangle must be 1/(b-a), which is also the probability density function of a uniform random variable over the region from a to b.

What is the mean of a uniformly distributed random variable? It is, conveniently, the midpoint of the interval from a to b, since half of the entire area under the graph of the pdf lies to the right of that midpoint, and half lies to the left. So the mean or mathematical expectation of a uniformly distributed random variable is (a+b)/2.

It is also possible to arrive at a convenient formula for the variance of such a uniform variable. Let us consider the following equation used for determining variance:

Var(X) = E(X^2) – [E(X)]^2, where X is our uniformly distributed random variable.

We already know that E(X) = (a+b)/2, so [E(X)]^2 must equal (a+b)^2/4. To find E(X^2), we can use the definition of such an expectation as the definite integral of x^2*f(x) evaluated from a to b, where f(x) is the pdf of our random variable. We already know that f(x) = 1/(b-a); so E(X^2) is equal to the integral of x^2/(b-a), which is x^3/[3(b-a)], evaluated from a to b. This gives (b^3 – a^3)/[3(b-a)], or (a^2 + ab + b^2)/3.

Thus, Var(X) = E(X^2) – [E(X)]^2 = (a^2 + ab + b^2)/3 – (a+b)^2/4 = (a^2 – 2ab + b^2)/12 = (b-a)^2/12, which is the variance for any uniformly distributed random variable.
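
A quick Monte Carlo check of these two formulas; the endpoints a = 2 and b = 10 and the sample size are arbitrary choices for illustration:

```python
import random

a, b = 2.0, 10.0
samples = [random.uniform(a, b) for _ in range(500_000)]

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

print(f"Sample mean     {mean:.3f}  vs (a + b)/2      = {(a + b) / 2:.3f}")       # ~6.000
print(f"Sample variance {var:.3f}  vs (b - a)^2 / 12 = {(b - a) ** 2 / 12:.3f}")  # ~5.333
```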

Ideas in Mathematics and Probability: Covariance of Random Variables (2007) – Article by G. Stolyarov II


The New Renaissance Hat
G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 5,200 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 17, 2014
***
Analyzing the variances of dependent variables and the sums of those variances is an essential aspect of statistics and actuarial science. The concept of covariance is an indispensable tool for such analysis.
***

Let us assume that there are two random variables, X and Y. We can call the mathematical expectations of each of these variables E(X) and E(Y) respectively, and their variances Var(X) and Var(Y) respectively. What do we do when we want to find the variance of the sum of the random variables, X+Y? If X and Y are independent variables, this is easy to determine; in that case, simple addition accomplishes the task: Var(X+Y) = Var(X) + Var(Y).

But what if X and Y are dependent? Then the variance of the sum most often does not simply equal the sum of the variances. Instead, the idea of covariance must be applied to the analysis. We shall denote the covariance of X and Y as Cov(X, Y).

Two crucial formulas are needed in order to deal effectively with the covariance concept:

Var(X+Y) = Var(X) + Var(Y) + 2Cov(X, Y)

Cov(X, Y) = E(XY) – E(X)E(Y)

We note that these formulas work for both independent and dependent variables. For independent variables, Var(X+Y) = Var(X) + Var(Y), so Cov(X, Y) = 0. Similarly, for independent variables, E(XY) = E(X)E(Y), so Cov(X, Y) = 0.

This leads us to the general insight that the covariance of independent variables is equal to zero. Indeed, this makes conceptual sense as well. The covariance of two variables is a tool that tells us how much of an effect the variation in one of the variables has on the other variable. If two variables are independent, what happens to one has no effect on the other, so the variables’ covariance must be zero.

Covariances can be positive or negative, and the sign of the covariance can give useful information about the kind of relationship that exists between the random variables in question. If the covariance is positive, then there exists a direct relationship between two random variables; an increase in the values of one tends to also increase the values of the other. If the covariance is negative, then there exists an inverse relationship between two random variables; an increase in the values of one tends to decrease the values of the other, and vice versa.

In some problems involving covariance, it is possible to work from even the most basic information to determine the solution. When given random variables X and Y, if one can compute E(X), E(Y), E(X^2), E(Y^2), and E(XY), one will have all the data necessary to solve for Cov(X, Y) and Var(X+Y). From the way each random variable is defined, one can derive the mathematical expectations above and use them to arrive at the covariance and the variance of the sum for the two variables.
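
As a concrete illustration, here is a small sketch with a made-up joint distribution for X and Y; the four outcomes and their probabilities are invented purely for the example:

```python
# A made-up joint distribution: each entry is ((x, y), probability)
joint = [((0, 0), 0.40), ((0, 1), 0.10), ((1, 0), 0.10), ((1, 1), 0.40)]

e_x  = sum(p * x for (x, y), p in joint)
e_y  = sum(p * y for (x, y), p in joint)
e_xy = sum(p * x * y for (x, y), p in joint)
e_x2 = sum(p * x * x for (x, y), p in joint)
e_y2 = sum(p * y * y for (x, y), p in joint)

var_x, var_y = e_x2 - e_x ** 2, e_y2 - e_y ** 2
cov_xy = e_xy - e_x * e_y                         # Cov(X, Y) = E(XY) - E(X)E(Y)

print(f"Cov(X, Y)  = {cov_xy:.2f}")                      # 0.15 > 0: a direct relationship
print(f"Var(X + Y) = {var_x + var_y + 2 * cov_xy:.2f}")  # 0.80
```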

Concepts in Probability Theory: Mathematical Expectation (2007) – Article by G. Stolyarov II


The New Renaissance Hat
G. Stolyarov II
July 17, 2014
******************************
Note from the Author: This article was originally published on Associated Content (subsequently, Yahoo! Voices) in 2007. The article earned over 10,000 page views on Associated Content/Yahoo! Voices, and I seek to preserve it as a valuable resource for readers, subsequent to the imminent closure of Yahoo! Voices. Therefore, this essay is being published directly on The Rational Argumentator for the first time.
***
~ G. Stolyarov II, July 17, 2014
***

The idea of expectation is crucial to probability theory and its applications. As one who has successfully passed actuarial Exam P on Probability, I would like to educate the general public about this interesting and useful mathematical concept.

The idea of expectation relies on some set of possible outcomes, each of which has a known probability and a known, quantifiable payoff — which can be positive or negative. Let us presume that we are playing a game called X with possible outcomes A, B, and C on a given turn. Each of these outcomes has a known probability P(A), P(B), and P(C) respectively. Each of the outcomes is associated with set payoffs a, b, and c, respectively. How much can one expect to win on an average turn of playing this game?

This is where the concept of expectation comes in. There is a P(A) probability of getting payoff a, a P(B) probability of getting payoff b, and a P(C) probability of getting payoff c. The expectation for a given turn of game X, E(X) is equal to the sum of the products of the probabilities for each given event and the payoffs for that event. So, in this case,

E(X) = a*P(A) + b*P(B) + c*P(C).

Now let us substitute some numbers to see how this concept could be applied. Let us say that event A has a probability of 0.45 of occurring, and if A occurs, you win $50. B has probability of 0.15 of occurring, and if B occurs, you lose $5. C has a probability of 0.4 of occurring, and if C occurs, you lose $60. Should you play this game? Let us find out.

E(X) = a*P(A) + b*P(B) + c*P(C). Substituting the values given above, we find that E(X) = 50*0.45 + (-5)(0.15) + (-60)(0.40) = -2.25. So, on an average turn of the game, you can be expected to lose about $2.25.

Note that this corresponds to neither of the three possible outcomes A, B, and C. But it does inform you of the kinds of results that you will approach if you play this game for a large number of turns. The Law of Large Numbers implies that the more times you play such a game, the more likely your average payoff per turn is to approach the expected value E(X). So if you play the game for 5 turns, you can be expected to lose 5*2.25 = $11.25, but you will likely experience some deviation from this in the real world. Yet if you play the game for 100 turns, you can be expected to lose 100*2.25 = $225, and your real-world outcome will most likely be quite close to this expected value.
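
Here is a short simulation of the game just described, illustrating how the average payoff per turn approaches E(X) = –$2.25 as the number of turns grows; the specific trial counts are arbitrary:

```python
import random

# Outcomes of the game: (probability, payoff in dollars)
outcomes = [(0.45, 50), (0.15, -5), (0.40, -60)]
probs = [p for p, _ in outcomes]
payoffs = [v for _, v in outcomes]

expected = sum(p * v for p, v in outcomes)
print(f"E(X) = {expected:.2f}")              # -2.25

for turns in (5, 100, 100_000):
    total = sum(random.choices(payoffs, weights=probs, k=turns))
    print(f"{turns:>7} turns: average payoff per turn = {total / turns:.2f}")
```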

In its more general form, for some random variable X, the expectation of X, or E(X), can be phrased as the sum of the products of all the possible outcomes x and their probabilities p(x). In mathematical notation, E(X) = Σ x*p(x), taken over all values of x. You can apply this formula to any discrete random variable, i.e., a random variable which assumes only a finite set of particular values.

For a continuous random variable Y, the mathematical expectation is equal to the integral of y*f(y) over the region on which the variable is defined. The function f(y) is called the probability density function of Y; its height over a given domain on a graph indicates the likelihood of the random variable assuming values in that domain.