Homicides in the US Fall for Second Year as Murder Rate Drops in 38 States – Article by Ryan McMaken

Ryan McMaken
December 28, 2019
***********************

As 2018 came to an end, politicians and media pundits insisted that “gun violence” was growing and hitting crisis levels.

While a homicide rate of anything greater than zero is a measure of very real human misery, it nonetheless turns out that fewer people were murdered in 2018 than in the year before. Moreover, 2018 was the second year in a row during which the homicide rate declined.

According to new homicide statistics released by the FBI last month, the homicide rate in the United States was 5 per 100,000 people. That was down from 5.3 per 100,000 in 2017 and down from 5.4 in 2016. In 2014, the homicide rate in the US hit a 57-year low, dropping to 4.4 per 100,000, making it the lowest homicide rate recorded since 1957.

At 5 per 100,000, 2018’s homicide rate has been cut nearly in half since the 1970s and the early 1990s, when the national homicide rate frequently exceeded nine per 100,000.

The regions with the largest declines were New England and the Mountain West, where homicide rates decreased 18 percent and 12 percent, respectively. The only region reporting an increase was the Mid-Atlantic, with an increase of one percent. This was driven largely by an increase in homicides in Pennsylvania.

At the state level, the homicide rate went down in 38 states and increased in 12.

The states with the lowest homicide rates were South Dakota, Rhode Island, New Hampshire, Vermont, and Maine. The states with the lowest rates were nearly all found in New England and in the West. For additional context, I have graphed US states with Canadian provinces (in red):

Indeed, when we map the states by homicide rate, we can see some clear regional differences:

In American political discourse, it is fashionable to insist that those places with the most strict gun control laws have the least amount of violence.

This position, of course, routinely ignores the fact that large regions of the US have very laissez faire gun laws with far lower levels of violent crime than those areas with more gun regulations. Moreover, if we were to break down the homicide rates into even more localized areas, we’d find that high homicide rates are largely confined to a relatively small number of neighborhoods within cities. Americans who live outside these areas — that is to say, the majority of Americans — are unlikely to ever experience homicide either first-hand or within their neighborhoods.

We can see the lack of correlations between gun control and homicide, for instance, if we compare state-level homicide rates to rankings of state-level gun laws published by pro-gun-control organizations.

For example, using the Giffords Center’s rankings of state gun policy, many of the states with the lowest homicide rates (South Dakota, Maine, New Hampshire, Vermont, and Utah) are states with the most laissez faire gun policies. The Giffords Center naturally ranks these states the lowest for gun policy, giving Maine and Utah grades of “F” and “D-”, respectively, although both are among the least violent places in all of North America.

Homicide vs. “Gun Violence”

As is so often the case when dealing with gun statistics put out by pro-gun-control groups, the Giffords Center attempts to fudge the numbers by measuring “gun deaths” rather than homicides. By design, this number includes suicides — which then makes violence rates look higher — while excluding all forms of homicide not involving guns.

Thus, a state with higher homicide rates overall — but with fewer gun homicides — will look less violent than it really is.

Meanwhile, a state with little violent crime, but with relatively high gun suicide rates, will be counted as a state with many “gun deaths.” These nuances are rarely explained in the public debate, however, and the term “gun deaths” is just thrown around with the intent of making places with looser gun laws look like they have more crime.

Moreover, the attempt to use suicide to “prove” that more guns lead to more suicides is easily shown to be baseless at the international level: the US has a totally unremarkable suicide rate even though it is far easier to acquire a gun in the US than in many countries with far higher suicide rates.

Mass Shootings

As the total number of homicides in the US has gone down in recent decades, many commentators have taken to fixating on mass shooting events as evidence that the United States is in the midst of an epidemic of shootings.

Mass shootings, however, occur in such small numbers as to have virtually no effect on nationwide homicide numbers.

According to the Mother Jones mass shootings listing, for example, there were 80 deaths resulting from mass shootings in 2018, or 0.5 percent of all homicides. That was down from 117 mass-shooting deaths in 2017, which was 0.7 percent of all homicides. And how will 2019 look? This year, there have been 66 mass-shooting deaths. On a per-month basis, mass shootings have so far been deadlier in 2019 than in 2018. But we could also note that although there have been 66 mass-shooting victims this year, the total number of homicides in Maryland alone fell by 68 from 2017 to 2018.

And then, of course, there is the issue of crime prevention through private gun ownership. Since averted crimes are not counted in any government database, we only know how many crimes actually occur. We don’t know how many are averted due to the potential victim being armed. Nor does the homicide data differentiate between criminal homicides and homicides committed in self-defense. Thus, sloppy researchers will simply report all homicides as criminal killings, but not all of them are.

As one might expect, pro-gun-control advocates insist that the number of crimes averted due to defensive weapons is very low. But, again, there is no empirical evidence showing this. Some gun control activists will point to studies that conclude more homicides occur in areas with more guns. These studies may be getting the causality backwards, however, since we’d expect gun ownership to rise in areas that are perceived to be more crime-ridden.

Ryan McMaken (@ryanmcmaken) is a senior editor at the Mises Institute. He has degrees in economics and political science from the University of Colorado, and was the economist for the Colorado Division of Housing from 2009 to 2014. He is the author of Commie Cowboys: The Bourgeoisie and the Nation-State in the Western Genre.

What Are the Chances That a Muslim Is a Terrorist? – Article by Sanford Ikeda

The New Renaissance Hat
Sanford Ikeda
******************************
It’s flu season, and for the past two days you’ve had a headache and sore throat. You learn that 90% of people who actually have the flu also have those symptoms, which makes you worry. Does that mean the chance of your having the flu is 90%? In other words, if there’s a 90% chance of having a headache and sore throat given that you have the flu, does that mean there’s a 90% chance of having the flu given that you have a headache and sore throat?

We can use symbols to express this question as follows: Pr(Flu | Symptoms) = Pr(Symptoms | Flu) = 90%?

The answer is no. Why?

If you think about it you’ll realize that there are other things besides the flu that can give you a combination of a headache and sore throat, such as a cold or an allergy, so that having those symptoms is certainly not the same thing as having the flu.  Similarly, while fire produces smoke, the old saying that “where there’s smoke there’s fire” is wrong because it’s quite possible to produce smoke without fire.

Fortunately, there’s a nice way to account for this.

How Bayes’ Theorem Works

Suppose you learn that, in addition to Pr(Symptoms | Flu) = 90%, the probability of a randomly chosen person having a headache and sore throat this season, regardless of the cause, is 10% – i.e. Pr(Symptoms) = 10% – and that only one person in 100 will get the flu this season – i.e. Pr(Flu) = 1%. How does this information help?

Again, what we want to know are the chances of having the flu, given these symptoms: Pr(Flu | Symptoms). To find that, we first need to multiply the probability of having those symptoms if we have the flu (90%) by the probability of having the flu (1%). In other words, there’s a 90% chance of having those symptoms if in fact we do have the flu, and the chance of having the flu is only 1%. That means Pr(Symptoms | Flu) x Pr(Flu) = 0.90 x 0.01 = 0.009, or 0.9%, or a bit less than one chance in 100.

Finally, we need to divide that result by the probability of having a headache and sore throat regardless of the cause, Pr(Symptoms), which is 10% or 0.10, because we need to know what fraction of all headache-and-sore-throat cases this season are actually due to the flu.

So, putting it all together, the answer to the question, “What is the probability that your Symptoms are caused by the Flu?” is as follows:

Pr(Flu | Symptoms) = [Pr(Symptoms | Flu) x Pr(Flu)] ÷ Pr(Symptoms) = 0.90 x 0.01 ÷ 0.10 = 0.09 or 9%.

So if you have a headache and sore throat there’s only a 9% chance, not 90%, that you have the flu, which I’m sure will come as a relief!
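For readers who want to check the arithmetic, here is a minimal sketch of the same calculation in Python, using nothing but the three probabilities given above:

```python
# Bayes' theorem: Pr(Flu | Symptoms) = Pr(Symptoms | Flu) * Pr(Flu) / Pr(Symptoms)
p_symptoms_given_flu = 0.90  # 90% of people with the flu have a headache and sore throat
p_flu = 0.01                 # 1 person in 100 gets the flu this season
p_symptoms = 0.10            # 10% of people have these symptoms, whatever the cause

p_flu_given_symptoms = p_symptoms_given_flu * p_flu / p_symptoms
print(f"Pr(Flu | Symptoms) = {p_flu_given_symptoms:.0%}")  # prints 9%
```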

This particular approach to calculating “conditional probabilities” is called Bayes’ Theorem, after Thomas Bayes, the 18th-century Presbyterian minister who came up with it. The example above is one that I got out of this wonderful little book.

Muslims and Terrorism

Now, according to some sources (here and here), 10% of Terrorists are Muslim. Does this mean that there’s a 10% chance that a Muslim person you meet at random is a terrorist?  Again, the answer is emphatically no.

To see why, let’s apply Bayes’ theorem to the question, “What is the probability that a Muslim person is a Terrorist?” Or, stated more formally, “What is the probability that a person is a Terrorist, given that she is a Muslim?” or Pr(Terrorist | Muslim)?

Let’s calculate this the same way we did for the flu using some sources that I Googled and that appeared to be reliable.  I haven’t done a thorough search, however, so I won’t claim my result here to be anything but a ballpark figure.

So I want to find Pr(Terrorist | Muslim), which according to Bayes’ Theorem is equal to…

1) Pr(Muslim | Terrorist):  The probability that a person is a Muslim given that she’s a Terrorist is about 10% according to the sources I cited above, which report that around 90% of Terrorists are Non-Muslims.

Multiplied by…

2) Pr(Terrorist): The probability that someone in the United States is a Terrorist of any kind. I calculated this by first taking the total number of known terrorist incidents in the U.S. back through 2000, which I tallied as 121 from this source and as 49 from this source. At the risk of over-stating the incidence of terrorism, I took the higher figure and rounded it to 120. Next, I multiplied this by 10 under the assumption that on average 10 persons lent material support for each terrorist act (which may be high), and then multiplied that result by 5 under the assumption that only one-in-five planned attacks are actually carried out (which may be low). (I just made up these multipliers because the data are hard to find; these numbers seem to be at the higher and lower ends of what is likely the case, and I’m trying to make the connection as strong as I can, but I’m certainly willing to entertain evidence showing different numbers.) This equals 6,000 Terrorists in America between 2000 and 2016, which assumes that no person participated in more than one terrorist attempt (not likely) and that all these persons were active terrorists in the U.S. during those 17 years (not likely), all of which means 6,000 is probably an over-estimate of the number of Terrorists.

If we then divide 6,000 by 300 million people in the U.S. during this period (again, I’ll over-state the probability by not counting tourists and visitors) that gives us a Pr(Terrorist) = 0.00002 or 0.002% or 2 chances out of a hundred-thousand.

Now, divide this by…

3) The probability that someone in the U.S. is a Muslim, which is about 1%.

Putting it all together gives the following:

Pr(Terrorist | Muslim) = [Pr(Muslim | Terrorist) x Pr(Terrorist)] ÷ Pr(Muslim) = 10% x 0.002% ÷ 1% = 0.0002 or 0.02%.

One interpretation of this result is that the probability that a Muslim person, whom you encounter at random in the U.S., is a terrorist is about 1/50th of one-percent. In other words, around one in 5,000 Muslim persons you meet at random is a terrorist.  And keep in mind that the values I chose to make this calculation deliberately over-state, probably by a lot, that probability, so that the probability that a Muslim person is a Terrorist is likely much lower than 0.02%.

Moreover, the probability that a Muslim person is a Terrorist (0.02%) is 500 times lower than the probability that a Terrorist is a Muslim (10%).

(William Easterly of New York University applies Bayes’ theorem to the same question, using estimates that don’t over-state as much as mine do, and calculates the difference not at 500 times but 13,000 times lower!)
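Putting the numbers above together in one place, here is a minimal sketch of the calculation in Python; every input is one of the (deliberately over-stated) figures chosen above, not independently verified data:

```python
# Rough, deliberately over-stated estimate of Pr(Terrorist) in the U.S., 2000-2016
incidents = 120               # higher tally of known U.S. incidents, rounded
supporters_per_incident = 10  # assumed material supporters per attack
planned_per_carried_out = 5   # assumed planned attacks per attack carried out
terrorists = incidents * supporters_per_incident * planned_per_carried_out  # 6,000

p_terrorist = terrorists / 300_000_000   # 0.00002, i.e. 0.002%
p_muslim_given_terrorist = 0.10          # ~10% of Terrorists are Muslim
p_muslim = 0.01                          # ~1% of the U.S. population is Muslim

# Bayes' theorem: Pr(Terrorist | Muslim) = Pr(Muslim | Terrorist) * Pr(Terrorist) / Pr(Muslim)
p_terrorist_given_muslim = p_muslim_given_terrorist * p_terrorist / p_muslim
print(f"Pr(Terrorist | Muslim) = {p_terrorist_given_muslim:.2%}")  # 0.02%
print(f"Pr(Muslim | Terrorist) is {p_muslim_given_terrorist / p_terrorist_given_muslim:.0f}x larger")  # 500x
```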

Other Considerations

As low as the probability of a Muslim person being a Terrorist is, the same data do indicate that a Non-Muslim person is much less likely to be a Terrorist.  By substituting values where appropriate – Pr(Non-Muslim | Terrorist) = 90% and Pr(Non-Muslim) = 99% – Bayes’ theorem gives us the following:

Pr(Terrorist | Non-Muslim) = [Pr(Non-Muslim | Terrorist) x Pr(Terrorist)] ÷ Pr(Non-Muslim) = 90% x 0.002% ÷ 99% = 0.00002 or 0.002%.

So one interpretation of this is that a randomly chosen Non-Muslim person is around one-tenth as likely to be a Terrorist as a Muslim person (i.e. 0.002% versus 0.02%). Naturally, the probabilities will be higher or lower if you’re at a terrorist convention or at an anti-terrorist peace rally; and if you have additional data that further differentiates among various groups – such as Wahhabi Sunni Muslims versus Salafist Muslims, or Tamil Buddhists versus Tibetan Buddhists – the results again will be more accurate.
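The same one-line sketch, swapping in the Non-Muslim figures used above:

```python
# Bayes' theorem with the Non-Muslim inputs from the text
p_terrorist = 0.00002                # 0.002%, the over-stated estimate from above
p_nonmuslim_given_terrorist = 0.90   # ~90% of Terrorists are Non-Muslim
p_nonmuslim = 0.99                   # ~99% of the U.S. population is Non-Muslim

p_terrorist_given_nonmuslim = p_nonmuslim_given_terrorist * p_terrorist / p_nonmuslim
print(f"Pr(Terrorist | Non-Muslim) = {p_terrorist_given_nonmuslim:.4%}")  # about 0.0018%, roughly one-tenth of 0.02%
```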

But whether you’re trying to educate yourself about the flu or terrorism, common sense suggests using relevant information as best you can. Bayes’ theorem is a good way to do that.

(I wish to thank Roger Koppl for helping me with an earlier version of this essay. Any remaining errors, however, are mine, alone.)

Sanford (Sandy) Ikeda is a professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism. He is a member of the FEE Faculty Network.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution 4.0 International License, which requires that credit be given to the author. Read the original article.

4 Ways to Misuse Gun Statistics – Article by Daniel Bier

The New Renaissance Hat
Daniel Bier
******************************

There are a lot of false, misleading, or irrelevant numbers being thrown around about guns and crime, so here’s a brief guide to four potentially misleading types of statistics.

1. “The United States has a gun for every person.”

It’s practically a rule that every report about guns has to mention some version of this statistic. There are “300 million guns in the United States,” “one gun for every person,” “more guns than people.”

This number is problematic not just because the estimates are dodgy (nobody really knows how many guns there are — estimates range from 250 million to 350 million) but also because of the way guns per capita is used interchangeably with the rate of gun ownership.

Confusing the two is a common mistake. Reported increases in guns per capita often make it appear that a tidal wave of guns is washing over the country. The Washington Post’s Wonkblog sounds the alarm that there are now “more guns than people.” Sounds scary — we’re outnumbered!

But the General Social Survey finds that 2014 actually marked an all-time low for gun ownership in the United States. (Gallup finds different numbers, but recent surveys by Pew and YouGov essentially confirm the GSS estimate.)

Yes, maybe if you collected all the guns in the country, you could give one to each man, woman, and child, and maybe there’d even be some left over. But this isn’t how gun ownership works. Just because there’s “one gun for everyone” doesn’t mean everyone has a gun. (Easy way to check this: look around you — see any guns? No? Okay then.)

The “one gun for every person” factoid is ubiquitous because it’s easy to remember and hammers home just how many guns there are. There’s some value in pointing out the huge total number of firearms in the United States — it captures the sheer scale of the issue when people are talking about trying to regulate, control, or confiscate them.

But it’s misleading to use the per capita figure to measure the kind of prevalence of guns that matters: how many people actually have firearms?

According to the GSS, just 31 percent of Americans live in a household with a gun — down from over 50 percent in the late 1970s — and only 22 percent personally own a gun. How can this be? Because most gun owners have more than one (and stores and collectors have a whole bunch).
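A toy example makes the distinction concrete. The 22 percent ownership rate is the GSS figure quoted above; the four-guns-per-owner figure is invented purely for illustration:

```python
# Hypothetical town of 100 residents
residents = 100
owners = 22          # 22% of people personally own a gun (GSS figure)
guns_per_owner = 4   # invented for illustration: owners tend to own several

total_guns = owners * guns_per_owner      # 88 guns
guns_per_capita = total_guns / residents  # 0.88 "guns per person"
ownership_rate = owners / residents       # 0.22

print(f"{guns_per_capita:.2f} guns per person, yet only {ownership_rate:.0%} of people own any gun")
```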

2. “The US has the highest rate of gun ownership in the world.”

Kinda, sorta, probly, maybe? This again is based on the number of guns per capita. This, at least, is unequivocally clear: whatever estimate you use, the United States has more guns per person than anywhere else.

But that doesn’t necessarily mean that the rate of ownership is higher here than in other countries, even countries with a lot fewer guns per capita.

How could that be? First, survey data for a lot of countries (particularly poor or repressed countries) is dodgy, hard to collect, and outdated, and there are a lot of unreported or illegal firearms. But more important, again, is the issue with conflating guns per capita with the rate of gun ownership.

Depending on the year and the estimate, the US has between 79 and 113 guns per 100 people. (Note the difficulty of getting an accurate figure, even in a developed country like the United States.)

For simplicity’s sake, let’s use the most commonly cited estimate from the 2007 international Small Arms Survey (SAS): about 88 guns per 100 people.

In the same SAS, Yemen comes second with an average estimate of about 55 guns per 100 people (low estimate: 29; high estimate: 81).

Yet this doesn’t necessarily mean that the US has a higher rate of gun ownership. Remember, in the US, only one third of people live in households with guns, and only about one fifth personally own guns.

There are several ways that Yemen could have a higher rate of gun ownership.

First, guns could be more evenly distributed: Yemen is poor, and guns are expensive, so it might be that in poor countries, more families have guns, but each owns fewer on average. (For instance, some sources claim, even under Saddam Hussein, most Iraqi households had a gun.)

Second, the average American household has 2.6 people; Yemen’s has 6.7 — so if someone owns a gun, roughly three times as many people live in that household in Yemen as in the US, on average, which means the household gun ownership rate could be a lot higher.

Third, the median age in Yemen is 18.6 years; in the US, it’s 37.6 years. Relative to population, Yemen has a lot more children than the US, so the rate of gun ownership among adults could be higher than in the US.

Serbia is also sometimes cited as having the second most guns per capita, but it’s hard to know because estimates vary so widely. According to a report from Radio Free Europe, “Some 15 percent of Serbia’s citizens legally own firearms.” Serbs have 1.2 million legally registered firearms, but some estimates of illegal firearms more than double that figure to 2.7 million guns.

Assuming that the legal gun owners don’t also own all of the illegal guns, illegal weapons could easily make the actual rate of gun ownership among Serbia’s seven million people higher than the US rate of 22 percent.

The same could also be true in developed countries like Switzerland and Finland (each with an estimated 45 guns per 100 people).

It’s definitely true that the US has the most guns in the world, but it isn’t certain that it has the highest rate of gun ownership.

What does this imply? I suspect it means very little — making uncontrolled international comparisons is generally deceptive — but given the ubiquity of the claim, a lot of people seem to think it matters a great deal to their argument. The fact that it isn’t even clear this claim is true should, perhaps, give them pause.

3. Conflating suicides with homicides

The Washington Post’s Fact Checker gave President Obama “two pinocchios” (signifying “significant omissions and/or exaggerations”) for his claim that “states with the most gun laws tend to have the fewest gun deaths.”

Setting aside the ambiguity of what it means to have the “most gun laws,” let’s pay attention to that last phrase. You’ll hear “gun deaths” or “gun-related deaths” referenced a lot when discussing statistics on shootings and gun control.

But, as Reason’s Jacob Sullum points out, about two-thirds of gun deaths are suicides.

While suicide is an important issue, it has nothing to do with crime, murder, or mass shootings. (And the research is mixed about whether restricting gun ownership reduces suicide.) Lumping suicide in with murder roughly triples the number of “gun deaths,” but it’s a deceptive way to look at the problem of violence committed with guns.

Both Sullum and WaPo’s fact checkers found that when you only look at states’ rate of gun homicides, excluding suicides, it makes a huge difference:

Alaska, ranked 50th [the highest in rate of gun deaths] … moved up to 25th place. Utah, 31st on the list, jumped to 8th place. Hawaii remains in 1st place, but the top six now include Vermont, New Hampshire, South Dakota, Iowa and Maine. Indeed, half of the 10 states with the lowest gun-death rates turn out to be states with less-restrictive gun laws.

Meanwhile, Maryland — a more urban state — fell from 15th place to 45th, even though it has very tough gun laws. Illinois dropped from 11th place to 38th, and New York fell from 3rd to 15th.

Suicide and murder have very different causes, consequences, and solutions, and they should always be discussed separately. When they aren’t, it’s a good time to be skeptical.

4. Juxtaposing two random numbers

This is a popular genre of pseudo-statistics, in which people throw together two totally unrelated numbers to try to inflate or downplay one of them.

For instance, the New York Times’s Nicholas Kristof claims, “In America, more preschoolers are shot dead each year (82 in 2013) than police officers are in the line of duty (27 in 2013).”

This is so irrelevant and so meaningless that I’m at a loss as to how it even occurred to Kristof to make this comparison. It serves no purpose at all but to emotionally rig the conversation.

There are maybe several hundred thousand police officers in the United States. There are 20 million children under age five.

What on earth could it mean that there are more preschoolers who die from guns than police killed in the line of duty? Do we have some reason to expect there should be a relationship, higher or lower or parity, between those numbers?

Or is it just that any number of tragedies above zero is going to churn up people’s emotions?

We’re not even comparing the same things: 27 felony murders of police with 82 gun-related deaths of children under five. According to the CDC, 30 of the gun-related deaths were accidents and one was “undetermined intent,” so there were actually 51 felony shooting deaths (typically, stray bullets from other crimes).

Kristof also used the 2013 figure for police murders, but 2013 was an aberrantly low year for cop killings. In 2014, 51 officers were killed in the line of duty; in 2011, it was 72. Presumably he thought it made a better comparison, but it’s just false to say 27 police are killed “each year.” Since 1980, the average is 64 officers killed each year.

What does this prove about the risk of gun violence? Absolutely nothing. And it is precisely as meaningful as Kristof’s comparison, or the common refrain that “more Americans have been murdered with guns in the last X years than in X wars.” There’s not even a suggestion about how these numbers should be related.

In America today, there are more preschoolers who drown (416 in 2013) than firefighters who die in the line of duty (97 in 2013).

What does this mean for the debate about water-related activities? Less than nothing.

Numbers don’t tell us what to do; at best, they tell us what we can do.

There’s no denying America has a lot of guns and a lot of gun crime (although much less than it used to). But numbers won’t tell us what to make of these facts. First, the raw facts of our situation are not as clear as we think, and to the extent we understand them, they don’t tell us much about our policy options. They won’t tell us what we should do about gun crime, or if there’s anything we constitutionally can do (with respect to gun ownership), or if those things sacrifice other important values.

Yet, too often, the debate consists of flinging random numbers and dubious statistics around and then emoting about them. Noting these problematic figures doesn’t prove anything one way or another about any particular policy; instead, let’s first clear out the rubbish so we can actually see the ground we’re fighting over.


Daniel Bier

Daniel Bier is the editor of FEE.org. He writes on issues relating to science, civil liberties, and economic freedom.

This article was originally published on FEE.org. Read the original article.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution 4.0 International License, which requires that credit be given to the author.

Banning “Assault Weapons” Will Not Save Lives – Article by Corey Iacono

The New Renaissance Hat
Corey Iacono
******************************

Last weekend, America regrettably witnessed one of the deadliest mass shootings in the country’s history at a gay nightclub in Orlando, Florida, in which 49 people were murdered and over 50 injured. The atrocity was carried out by a fanatic who pledged allegiance to the Islamic State, using a civilian semi-automatic rifle, the Sig Sauer MCX. (Early reports that it was an AR-15 were mistaken.)

In the wake of this attack, many people have laid the blame on America’s relatively lax gun laws, arguing that so-called “assault weapons” (more appropriately known as semi-automatic rifles) and high-capacity magazines should be banned from civilian use.

They note that many of the deadliest shootings in American history have involved rifles like the AR-15, and they propose that such rifles should be banned to prevent heinous crimes like the Orlando massacre from occurring in the future.

Homicides Dehomogenized

But while it may be true that many mass shootings involved semi-automatic rifles, these events are rare. In fact, the latest data (2014) from the FBI show that all types of rifles were only confirmed to have been used in 248 homicides, down from 351 in 2009. Given the total number of homicides (11,961), rifles were confirmed to have been used in only two percent of murders.

You’re more likely to be stabbed, strangled, or beaten to death with bare hands than killed by someone with a rifle.

It’s impossible to know the true number of murders involving “assault weapons,” because the term is so nebulous, and because the FBI only looks at the categories of rifle, shotgun, and handgun. There are also nearly 2,000 gun murders in which the type of firearm used is unknown. But a rough estimate of 328 homicides with all rifles (extrapolated from rifles’ share of gun murders where the type of weapon is known) is probably close to the truth.
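The extrapolation works roughly as sketched below; note that the total-gun-murder figure is an assumed placeholder, since the article does not state it, so the output is only meant to show the method:

```python
# Allocate gun murders of unknown weapon type to rifles in proportion to
# rifles' share among gun murders where the weapon type is known.
rifle_confirmed = 248       # confirmed rifle homicides (FBI, 2014)
unknown_type = 2_000        # "nearly 2,000" gun murders with the weapon type unknown
total_gun_murders = 8_100   # ASSUMED round placeholder, not a figure from the article

known_type = total_gun_murders - unknown_type  # gun murders with a known weapon type
rifle_share = rifle_confirmed / known_type     # rifles' share where the type is known
rifle_estimate = rifle_confirmed + rifle_share * unknown_type

print(f"Estimated rifle homicides: {rifle_estimate:.0f}")  # ~329 with these inputs, close to the ~328 above
```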

To be very generous to the assault weapon ban argument, let’s assume that all of these 328 murders were done with assault weapons. That would imply that such weapons were involved in less than three percent of all homicides in the United States, at most.

Such deaths are as terrible as any murder, but it is also true that knives, blunt objects, and hands/feet were confirmed to have been used in 1,567, 435, and 660 murders respectively. You are much more likely to be stabbed, strangled, or beaten to death with bare hands than killed by someone with a rifle, and the chances of being killed with an “assault-type rifle” are necessarily lesser still.

Bans Don’t Work

There is also little evidence that these weapons bans have worked in the past. From 1994 to 2004, Congress banned the manufacture, sale, or transfer of a large number of “assault weapons” (including some handguns and high-capacity magazines). An assessment study commissioned by the Department of Justice in 2004 found no evidence that the ban had had any effect on gun violence and concluded that “should it be renewed, the ban’s effects on gun violence are likely to be small at best and perhaps too small for reliable measurement.”

Violent ideologues will not be deterred from their paths of destruction by minor inconveniences.

Research by economist Mark Gius of Quinnipiac University revealed no evidence that either state or federal “assault weapons” bans reduced firearm-homicide rates. Carlisle E. Moody of the College of William and Mary found no evidence that the federal ban on high-capacity magazines had any effect on homicide rates.

Regarding terrorist attacks like the one in Orlando, it’s not clear, even in retrospect, that they would be prevented by more restrictive gun control measures. Stringent gun laws in California and France failed to prevent the recent massacres in San Bernardino and Paris. People driven to violence by ideology will not be easily deterred from their paths of destruction by minor inconveniences; it is simply naïve to believe that smaller magazines or not having a folding stock would have stopped them.

In any event, keeping in mind the horrors that mass shootings entail, “assault weapons” are not even connected to a significant amount of crime in the United States. Even if confiscating and banning them completely erased homicides committed with them, and the perpetrators didn’t substitute other legally available firearms, the effect on homicide rates would be statistically very small.

Many Americans simply don’t believe that some of the most popular rifles in America (overwhelmingly owned for legal and peaceful reasons) should be banned or that tens of millions of Americans’ rights should be infringed upon for so little to show for it. If you care about violence in America, you shouldn’t waste your time on the red herring of “assault weapons.”


Corey Iacono

Corey Iacono is a student at the University of Rhode Island majoring in pharmaceutical science and minoring in economics. He is a Foundation for Economic Education (FEE) 2016 Thorpe Fellow.

This article was originally published on FEE.org. Read the original article.

GDP Economics: Fat or Muscle? – Article by David J. Hebert

The New Renaissance Hat
David J. Hebert
November 1, 2014
******************************

Recently, Italy “discovered” it was no longer in a recession. Why? The nation started counting GDP figures differently.

Adding illegal revenue from hookers, narcotics and black market cigarettes and alcohol to the eurozone’s third-biggest economy boosted gross domestic product figures.

GDP rose slightly from a 0.1 percent decline for the first quarter to a flat reading, the national institute of statistics said.

Italian officials are, of course, celebrating. In politics, perceptions are more important than reality. But such celebration is troubling for several reasons, which have less to do with headlines or black markets and more to do with fat.

One of F. A. Hayek’s lasting insights was that aggregate variables mask an economy’s underlying structure. For example, a country’s GDP can be calculated by summing the total amount of consumption, investment, government spending, and net exports in a given year. The higher this number, the better an economy is supposed to be doing. But adding these figures together and looking only at their sum can be wildly misleading.

One way to illustrate why is through the following example: I am currently six foot one and weigh 217 pounds. As it turns out, Adrian Peterson, a running back for the NFL’s Minnesota Vikings, is the same height and weight. Looking at only these two variables, Peterson and I are identical. Obviously, this isn’t true.

Likewise, cross-country GDP comparisons are difficult to make. If two nations grow at the same rate, for example, but one nation “invests” in useless boondoggles while the other grows sustainable businesses, we wouldn’t want to claim that both countries have equally healthy economies.
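A trivial sketch of the point, with all figures invented for illustration: two economies whose components sum to the same GDP can have very different underlying structures.

```python
# GDP = C + I + G + NX; identical totals can mask very different compositions.
economy_a = {"C": 600, "I": 250, "G": 100, "NX": 50}   # investment-heavy
economy_b = {"C": 850, "I": 20, "G": 150, "NX": -20}   # consumption- and government-heavy

gdp_a = sum(economy_a.values())
gdp_b = sum(economy_b.values())
print(gdp_a, gdp_b)  # 1000 1000 -- the aggregate alone cannot tell them apart
```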

But what about comparisons of a country’s year-to-year GDP? Is this valuable information? Well, yes and no.

If we know that more stuff is being produced this year than last year, we can infer that more activity is happening. However, this doesn’t mean that government should subsidize production in order to increase activity. In that case, all they’re accomplishing is increasing the number of things that are being done at the expense of other things that could have been done.

What economists should be looking for are increases in economically productive activity from year to year. For example, digging a hole and then filling it back in does increase the measure of activity, but it’s not adding any value to society. Digging a hole in your backyard and filling it with water is also activity, but it’s productive because you now have a swimming pool, which you value enough to employ people to create.

It’s no mystery that Italy is seeing a higher GDP as a result of its change in measurement and that as a result it’s avoided a recession on paper. That is, it’s counting more activities as “productive” than it was previously. It is wrong to conclude, though, that more production is actually happening in Italy. These activities were happening before; they just weren’t being counted in any official statistics.

There are many problems with using GDP as a measure for an economy’s health. Changing what counts toward GDP only introduces yet another confounding factor. When I step on the scale, I can get some basic idea of how healthy I am. But when I take my shoes off and step on the scale again, I didn’t magically become healthier. I just changed what’s counting toward my weight. It would be wrong for me to conclude that I can skip the gym today as a result of this recorded weight loss. Similarly, citizens of Italy should not be celebrating their increased GDP. They still face the same problems as before and must still address them.

David Hebert is an Assistant Professor of Economics at Ferris State University. His interests include public finance and property rights.

This article was originally published by The Foundation for Economic Education.

Drug Warriors Claim Colorado Going to Pot – Article by Mark Thornton

The New Renaissance Hat
Mark Thornton
September 20, 2014
******************************

As we moved into the second half of 2014, I was eager to learn if marijuana legalization in Colorado was succeeding. At first there was little being reported, but eventually reports started appearing in the news. Business Insider reported that “Legalizing Weed in Colorado Is A Huge Success,” although they did temper their report with a “Down Side” as well. Jacob Sullum reported that such things as underage consumption and traffic fatalities have fallen, although the declines were statistically insignificant and part of already declining trends in the statistics.

The important thing for me is that things did not get much worse according to these reports. When you open the door to a newly legal recreational drug via a very clunky regulatory circus, with the government giving its seal of approval, there are bound to be growing pains and tragic cases. For example, one college student jumped to his death after ingesting six times the recommended number of pot-infused cookies.

The third report I came across was an editorial from the venerable Heritage Foundation. Given the previous reports, I was astonished to learn that in Colorado marijuana use was associated with an increase in highway fatalities, DUI arrests, underage consumption, drug-related student expulsions, college student use, and marijuana-related emergency room visits and hospitalizations.

The editorial concludes: “Drug policy should be based on hard science and reliable data. And the data coming out of Colorado points to one and only one conclusion: the legalization of marijuana in the state is terrible public policy.”

However, I began to get suspicious when I found out that the “hard science and reliable data” were not collected, produced, or analyzed by the Heritage Foundation, but by some outfit named the “Rocky Mountain High Intensity Drug Trafficking Area” program. They produced the report entitled “The Legalization of Marijuana in Colorado: The Impact,” which strongly calls into question the legalization of marijuana in Colorado.

There was no information about the “Rocky Mountain High Intensity Drug Trafficking Area” program (RMHIDTA) in the report other than that it was produced by the “Investigative Support Center” in Denver, Colorado. It turns out the program is actually controlled by the White House Office of National Drug Control Policy, otherwise known as the Drug Czar.

Colorado has been in the process of legalizing marijuana since 2000. It initially started small with limited medical marijuana and as of January of 2014, it has legalized both medicinal and recreational marijuana with local option for commercial production and retail distribution. So we should expect, ceteris paribus, that the full price to consumers has fallen and that consumption for medical and recreational use has increased.

One of the most distressing empirical results in the RMHIDTA report was that while overall traffic fatalities decreased 14.8 percent between 2007 and 2012 in Colorado, traffic fatalities involving drivers, pedestrians, and bicyclists who tested positive for marijuana increased by 100 percent. This data is exploited over several pages of the report using a variety of tables and charts.

If Coloradoans were consuming more marijuana and relatively less alcohol, we would expect the number of traffic fatalities to decrease because marijuana has been found to be relatively much safer than alcohol in terms of driving and motor skills. But the data indicating a 100 percent increase in fatalities involving marijuana is puzzling, disturbing, and at odds with “hard science.” If this was indeed “reliable data” it would indicate that marijuana consumption in Colorado had greatly increased beyond anyone’s estimation.

It turns out RMHIDTA’s data was anything but “reliable” and would be best characterized as misleading. If you examine the footnote section of the report, you will find that the data from 2012 “represents 100 percent reporting” due to the efforts of RMHIDTA to scour several data sources. However, a footnote reveals that in the data from 2006 through 2012, a very slight majority of cases, 50.13 percent, were not tested! If 100 percent were tested in 2012, then the percent tested for 2006–2011 is far less than 50 percent.

What this means is that if you increased blood testing to 100 percent in 2012, when you were testing less than 50 percent of cases in prior years, you should expect to find at least a 100 percent increase in traffic fatalities with some detection of marijuana. This result not only brings into question the report’s “reliable data,” it brings into serious question RMHIDTA’s respect for “hard science.”
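A back-of-the-envelope sketch shows the mechanical effect; the case count and underlying positive rate below are invented for illustration and held constant across years:

```python
# Doubling the share of cases that get blood-tested roughly doubles the count of
# "tested positive for marijuana" cases even if driver behavior never changes.
cases = 500                 # fatal-crash cases per year (hypothetical)
true_positive_rate = 0.10   # share of all cases that would test positive (held constant)

share_tested_before = 0.50  # less than half of cases tested in 2006-2011
share_tested_2012 = 1.00    # 100 percent reporting in 2012

detected_before = cases * share_tested_before * true_positive_rate  # 25 detected
detected_2012 = cases * share_tested_2012 * true_positive_rate      # 50 detected

increase = (detected_2012 - detected_before) / detected_before
print(f"Detected marijuana-positive fatalities rise {increase:.0%} with no change in behavior")  # 100%
```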

Another basic problem with their data is the meaning of “testing positive for marijuana.” Marijuana’s active ingredient THC can remain detectable days and weeks after it has been consumed. In contrast, marijuana impairment only lasts for several hours and is somewhat offset by safer driving behaviors, such as driving at slower speeds and avoiding high traffic areas. Given that marijuana consumption has increased significantly since 2000 we should indeed expect many more positive blood tests, but without making the leap that marijuana consumption is causing more highway fatalities.

The RMHIDTA report also offers up some dreary data on youth marijuana use. In particular, it concludes that marijuana use by young Coloradans is higher than the national average and increasing. Most importantly, it points out that between the 2008–09 and 2012–13 school years there was a 32 percent increase in drug-related suspensions and expulsions in Colorado.

Other experts using different data sources believe that there has actually been a secular trend of decreasing marijuana use by the young people of Colorado throughout the entire legalization process. However, with respect to suspensions and expulsions, there has indeed been a 32 percent increase in the number of drug-related suspensions and expulsions.

However, weighted on a per pupil basis, there has been virtually no increase in the rate of drug-related suspensions and expulsions. The numerical increase in suspensions and expulsions is more than completely accounted for by the increased number of pupils and the relative increase of impoverished minority groups.

In addition, “drug-related” suspensions and expulsions involve other drugs besides marijuana, such as cocaine, heroin, and methamphetamine. Marijuana-related suspensions and expulsions are overwhelmingly related to “possession” and being “under the influence,” not things like violence, property destruction, and classroom disturbances.

The RMHIDTA report spans over 150 pages, but everything I had time to examine was either clearly wrong, misleading, or intentionally sensational.

Of course there are other sources of misinformation on the relative risks of cannabis. For example, there are academics who warn of the dangers of cannabis while they are receiving money from the pharmaceutical pain drug companies. This again suggests a deliberate attempt to mislead the public.

This is particularly disturbing and relevant information given recent reports which indicate that relatively fewer overdose painkiller deaths are occurring in states with medical marijuana laws.

There are clearly some things wrong with Colorado’s approach to legalizing marijuana and there are clearly going to be some bad results at the individual and state level, but this report is not the right way of determining and correcting those problems. As the Heritage Foundation editorial concluded: “Drug policy should be based on hard science and reliable data.”

Mark Thornton is a senior resident fellow at the Ludwig von Mises Institute in Auburn, Alabama, and is the book review editor for the Quarterly Journal of Austrian Economics. He is the author of The Economics of Prohibition, coauthor of Tariffs, Blockades, and Inflation: The Economics of the Civil War, and the editor of The Quotable Mises, The Bastiat Collection, and An Essay on Economic Theory. Send him mail. See Mark Thornton’s article archives.

This article was published on Mises.org and may be freely distributed, subject to a Creative Commons Attribution United States License, which requires that credit be given to the author.