The Importance of Free Speech to Human Progress – Article by Iain Murray

The New Renaissance Hat
Iain Murray
January 10, 2015
******************************

From Principia Mathematica to Charlie Hebdo

 

The massacre of 12 people, including cartoonists and journalists, at the Paris offices of Charlie Hebdo magazine this week should remind us to ask: Why is free speech so important?

It is more than an inalienable individual right; it is fundamental to human progress. That is why it is one of the most important institutions of liberty.

When we look at the history of the freedom of speech in the West, we see that early on it was tied up with the freedom of the press, which is why the terms are used interchangeably in American constitutional theory. Yet, for most of the West’s history, the idea of “publishing” was meaningless. Books were copied by hand, first by scribes hired by Roman nobles to copy books they liked, then by monks in medieval scriptoria, with the more ancient texts copied as practice for copying the more important religious texts. As a result, many texts were lost, with others surviving by mere chance.

Having assumed the role of guardian of learning, the medieval church was ill-disposed toward innovations that threatened its position. The suppression of early English versions of the Bible is a case in point. Information traveled slowly, impeding the progress of intellectual innovation.

The printing press changed all that, as it brought about the first series of real struggles over freedom of speech. Ideas could travel more quickly, and literacy exploded.

As people could finally read the Bible for themselves, Reformation movements grew all over Europe. Then they took to using the press to spread other ideas. In response, the church and its allies in positions of power took steps to restrain this new free press. In fact, early copyright law arose from efforts to regulate what printers could produce.

It should not surprise us that early libertarians were often printers. “Freeborn John” Lilburne was first arrested for printing and circulating unlicensed books.

The great poet John Milton wrote perhaps the first great defense of free speech when the English Parliament reintroduced censorship via the Licensing Order of 1643 (censorship had effectively been abolished in 1640 along with the Star Chamber, which tried Lilburne). In his Areopagitica, Milton passionately demanded freedom of the press and tolerance of heterodox publications, saying, “Give me the liberty to know, to utter, and to argue freely according to conscience, above all liberties.”

The licensing regime lapsed in 1695 as a result of the Glorious Revolution of 1688, which instituted a more liberal constitution in England and helped to inspire the American Revolution — and eventually the Bill of Rights and First Amendment. But the Areopagitica is still with us. Fittingly, the US Supreme Court cited it as an authority on the inherent value of false statements in the landmark case New York Times v. Sullivan:

Even a false statement may be deemed to make a valuable contribution to public debate, since it brings about “the clearer perception and livelier impression of truth, produced by its collision with error.” Mill, On Liberty (Oxford: Blackwell, 1947), p. 15; see also Milton, Areopagitica, in Prose Works (New Haven, CT: Yale, 1959), vol. 2, p. 561.

The free press opened new communication channels for theoretical innovation. It is often noted that Sir Isaac Newton was born the year Galileo died. What enabled Newton to take Galileo’s experiments and turn them into modern physics was the printing press. Newton published Principia Mathematica in 1687, and revised it in 1713 and 1726. The book was published by the Royal Society, founded in London in 1660, which essentially invented peer review (see this fascinating series of videos on the society’s role in the invention of modern science). Newton’s book spread throughout Europe, which would not have been possible under earlier regimes where printing was tightly controlled.

Central to the principle of a free press is the right to be wrong — which enables peer review and criticism in the first place. It is also central to scientific and technological innovation and experimentation, and therefore also central to economic progress, which has led to the great explosion in human welfare we have seen over the last two centuries. Free speech allows more ideas to “have sex,” to use Matt Ridley’s phrase, and that is why societies that are frightened by the consequences of this ideological sexual revolution are those with the most severe censorship laws.

At this point, one might argue that it is absurd to compare a “blasphemous” cartoon to the Principia Mathematica. But that would be a mistake. As Stephen Law has written for the Center for Inquiry, the point of such cartoons is not to cause offense, but something far greater:

More often than not, the lampooning is done with the intention of shattering, if only for a moment, the protective façade of reverence and deference that has been erected around some iconic figure or belief, so that we can all catch a glimpse of how things really are.

It is exactly that goal — to help us determine what actually is, rather than what is simply asserted — that free speech and free inquiry make possible. As an institution of liberty, free speech must be defended wherever it is attacked. (My colleague Hans Bader has written elsewhere about letting down our guard.) Those who seek to suppress free speech want to keep mankind mired in poverty and ignorance, subject to their own whims and beliefs. They cannot be allowed to succeed.

Iain Murray is vice president at the Competitive Enterprise Institute.

This article was originally published by The Foundation for Economic Education.

Plot Holes in Fiction and in Life – Article by Sanford Ikeda

The New Renaissance Hat
Sanford Ikeda
August 23, 2014
******************************

Fans of J.R.R. Tolkien’s trilogy The Lord of the Rings (LOTR) have long been aware of a possible plot hole. The central narrative concerns the hero, Frodo Baggins, who must destroy a powerful ring by walking through forbidding terrain, defeating or eluding monstrous foes, and throwing the ring into an active volcano. The journey takes many months and costs Frodo and his companions dearly.

Over the years, many readers have noticed a much easier and less dangerous solution. Why, they ask, didn’t Frodo just have Gandalf ask his friends the mighty eagles to fly him swiftly over enemy territory so he could then simply toss the ring into the volcano? I’ve run across this post on Facebook a few times, which cleverly patches that hole with only a slight change in the narrative. (Others argue that there’s really no hole to patch because the “eagle solution” itself has flaws. And so the debate continues.)

Anyway, it occurred to me that the kind of social theory that I and many Austrian economists engage in could usefully be framed in terms of plot holes.

What’s a plot hole?

I’ll define a plot hole as a failure of logic, a factual mistake, or an obvious solution to a critical problem central to a story. (Here’s a slightly different definition from Wikipedia.) Of course, any particular plot hole may involve more than one of these errors of fact, logic, or perception, and there may be more kinds of plot holes than these. But here are examples of each of the ones I’ve mentioned. They come from movies, but some of them, such as the plot hole in Lord of the Rings, have literary counterparts.

Factual hole: In the movie Independence Day, key characters survive a massive fireball by ducking into the open side-door of a tunnel just as the inferno blasts by. Anyone who knows about firestorms would tell you that the super-heated air alone would instantly kill anyone in that situation.

Logical hole: In Citizen Kane, miserable Charles Foster Kane dies alone. How then does anyone know that his last word was “Rosebud”? Keep in mind that it’s a reporter’s search for the meaning of that word that drives the story forward.

Perceptual hole: The LOTR problem mentioned above is an example of this. No one seems to realize that there may be a much safer and more effective way to defeat the enemy.

I would think that one of the things that makes writing fiction difficult is that events and characters have to hang together. The writer needs always to keep in mind the rules of the universe she’s creating, to recall what her characters know and when they know it, and to make sure that these details all constrain every action and event.

Life is full of “plot holes”

In real life, we make mistakes all the time. I think it’s interesting that those mistakes appear to fit neatly into the three categories of plot holes I’ve identified.

Factual hole/error: A person who doesn’t know the difference between liters and gallons buys a 100-liter barrel to hold 100 gallons of rainwater. Little explanation necessary: a US gallon is roughly 3.8 liters, so a 100-liter barrel holds only about 26 of the intended 100 gallons.

Logical hole/error: Thinking that since you’ve made a string of bad investment decisions, your next decision is therefore more likely to be a good one. But it’s quite the contrary: if you’ve been consistently making bad decisions, it follows that, if nothing else changes, your next decision will also be a bad one. (See “gambler’s fallacy”; the short simulation after these examples makes the point concrete.)

Perceptual hole/error: Selling your car for $15,000 when, unbeknownst to you, you could have sold it for $20,000. The better deal simply escapes your notice and, if you were ever to learn about it, you would feel regret.
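
To make the gambler’s fallacy concrete, here is a minimal Python sketch (my illustration, not from the article) using the canonical coin-flip version of the error. It estimates the probability of heads on the flip immediately following a streak of tails and compares it with the overall probability; since the flips are independent, the two should match.

import random

def heads_after_tails_streak(trials=100_000, streak_len=5):
    """Compare P(heads) overall with P(heads) on the flip that
    immediately follows a run of streak_len consecutive tails."""
    after_streak_heads = after_streak_total = 0
    overall_heads = 0
    tails_run = 0
    for _ in range(trials):
        flip_is_heads = random.random() < 0.5  # fair coin, independent flips
        overall_heads += flip_is_heads
        if tails_run >= streak_len:
            # This flip comes right after streak_len consecutive tails.
            after_streak_total += 1
            after_streak_heads += flip_is_heads
        tails_run = 0 if flip_is_heads else tails_run + 1
    print(f"P(heads) overall:        {overall_heads / trials:.3f}")
    print(f"P(heads) after a streak: {after_streak_heads / after_streak_total:.3f}")

heads_after_tails_streak()
# Both estimates come out near 0.500: a run of bad outcomes does nothing
# to make the next independent outcome more likely to be good.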

Here’s the difference though: In fiction, a writer can get away with any of these three plot holes as long as no reader sees it. Even if you do notice one, you might be willing to overlook it if you otherwise enjoy the story. But in real life, you can’t ignore factual or logical plot holes. If you try to, they will come back and bite you. It will be painfully obvious that you can’t put 100 gallons of water into a 100-liter barrel. And if you bet on your next investment being a winner because you’ve just had a bunch of losers, it’s very likely that you’ll be disappointed. These kinds of holes you’re bound to discover.

I wrote about errors in an earlier column, but the distinction comes from my great teacher Israel Kirzner. He identifies a class of errors that derive from “overoptimism.” The more optimistic you are, the more likely it is that you’ll deliberately pass up solid opportunities for gain, and thus the more likely it is that you’ll be disappointed. That’s not to say that optimism is a bad thing. If you never acted on your optimism, you’d never know whether it was warranted. You would never learn.

The other kind of error, what Kirzner calls “overpessimism,” happens when you’re so pessimistic that you unwittingly pass up a realizable opportunity. And because you don’t take chances, you don’t learn. This type of error is akin to a perceptual hole. Thinking you can only get $15,000 for your car means not selling to someone who would in fact pay more. Here, it’s not inevitable that you will discover your error because, after all, someone does buy your car (for $15,000). But you could have done better if you’d been more alert.

So errors of overpessimism, what I’m calling perceptual holes, are very different from factual and logical holes in that they are much harder to detect.

Plot holes and social theory

For many Austrian economists like me, economics, as a branch of social theory, accepts as a datum that people are prone to make mistakes. But given the right rules of the game—private property, free association—they can discover those mistakes and correct them via an entrepreneurial-competitive process. Unlike plot holes in fiction writing, then, plot holes in living social systems are a feature, not a bug.

So our challenge as flesh-and-blood people, and what makes our lives interesting, is to discover plot holes, especially perceptual ones, and to fill them in. The challenge of social theorists is to understand as much as we can about how that happens. In novels it’s the people outside the story who discover holes; in society it’s the people living the story who do.

Plot holes in novels spell failure. Plot holes in real life mean opportunity.

Sanford Ikeda is an associate professor of economics at Purchase College, SUNY, and the author of The Dynamics of the Mixed Economy: Toward a Theory of Interventionism.
***
This article was originally published by The Foundation for Economic Education.

Philosophy Lives – Contra Stephen Hawking – Video by G. Stolyarov II

Mr. Stolyarov’s refutation of Stephen Hawking’s statement that “philosophy is dead.”


References
– “Philosophy Lives – Contra Stephen Hawking” – Essay by G. Stolyarov II
– “The Grand Design (book)” – Wikipedia
– “Stephen Hawking” – Wikipedia

Philosophy Lives – Contra Stephen Hawking – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
January 1, 2013
******************************

In his 2010 book The Grand Design, cosmologist and theoretical physicist Stephen Hawking writes that science has displaced philosophy in the enterprise of discovering truth. While I have great respect for Hawking both in his capacities as a physicist and in his personal qualities – his advocacy of technological progress and his determination and drive to achieve in spite of his debilitating illness – the assertion that the physical sciences can wholly replace philosophy is mistaken. Not only is philosophy able to address questions outside the scope of the physical sciences, but the coherence and validity of scientific approaches itself rests on a philosophical foundation that was not always taken for granted – and still is not in many circles.

Hawking writes, “Living in this vast world that is by turns kind and cruel, and gazing at the immense heavens above, people have always asked a multitude of questions: How can we understand the world in which we find ourselves? How does the universe behave? What is the nature of reality? Where did all this come from? Did the universe need a creator? Most of us do not spend most of our time worrying about these questions, but almost all of us worry about them some of the time. Traditionally these are questions for philosophy, but philosophy is dead. Philosophy has not kept up with modern developments in science, particularly physics. Scientists have become the bearers of the torch of discovery in our quest for knowledge.”

I hesitate to speculate why Hawking considers philosophy to be “dead” – but perhaps this view partly arises from frustration at the non-reality-oriented teachings of many postmodernist philosophers who still prevail in many academic and journalistic circles. Surely, those who deny the comprehensibility of reality and allege that it is entirely a societal construction do not aid in the quest for discovery and understanding of what really exists. Likewise, our knowledge cannot be enhanced by those who deny that there exist systematic and specific methods that are graspable by human reason and that can be harnessed for the purposes of discovery. It is saddening indeed that prominent philosophical figures have embraced anti-realist positions in metaphysics and anti-rational, anti-empirical positions in epistemology. Physicists, in their everyday practice, necessarily rely on external observational evidence and on logical deductions from the empirical data. In this way, and to the extent that they provide valid explanations of natural phenomena, they are surely more reality-oriented than most postmodernist philosophers. Yet philosophy does not need to be this way – and, indeed, philosophical schools of thought throughout history and in the present day are not only compatible with the scientific approach to reality, but indispensable to it.

Contrary to the pronouncements of prominent postmodernists, a venerable strain of thought – dating back to at least Aristotle and extending all the way to today’s transhumanists, Objectivists, and natural-law thinkers – holds that an objective reality exists, that it can be understood through systematic observation and reason, and that its understanding should be pursued by all of us. This is the philosophical strain responsible for the accomplishments of Classical Antiquity and the progress made during the Renaissance, the Enlightenment, the Industrial Revolution, and the Information Revolution. While such philosophy is not the same as the physical sciences, the physical sciences rely on it to the extent that they embrace the approach known as the scientific method, which itself rests on philosophical premises. These premises include the existence of an external reality independent of the wishes and imagination of any observer, the existence of a definite identity of any given entity at any given time, the reliance on identical conditions producing identical outcomes, the principles of causation and non-contradiction, and the ability of human beings to systematically alter outcomes in the physical world by understanding its workings and modifying physical systems accordingly. This latter principle – that, in Francis Bacon’s words, “Nature, to be commanded, must be obeyed” – was the starting point for the Scientific Revolution of the 17th Century, which inaugurated subsequent massive advances in technology, standards of living, and human understanding of the universe.  Even those scientists who do not acknowledge or explicitly reject the importance of philosophy nonetheless implicitly rely on these premises in the very conduct of their scientific work – to the extent that such work accurately describes reality. These premises are not the only ones possible – but they are the only ones that are fully right. Alternatives – including reliance on alleged supernatural revelation, wishful thinking, and unconditional deference to authority – have been tried time and again, only to result in stagnation and mental traps that prevented substantive improvements to the human condition.

But there is more. Not only are the physical sciences without a foundation if philosophy is to be ignored, but the very reason for pursuing them remains unaddressed without the branch of philosophy that focuses on what we ought to do: ethics. Contrary to those who would posit an insurmountable “is-ought” gap, ethics can indeed be derived from the facts of reality, but not solely by the tools of physics, chemistry, biology, or any others of the “hard” physical sciences. An additional element is required: the fact that we ourselves exist as rational, conscious beings, who are capable of introspection and of analysis of external data. From the physical sciences we can derive ways to sustain and improve our material well-being – sometimes our very survival. But only ethics can tell us that we ought to pursue such survival – a conclusion we reach through introspection and logical reasoning. No experiment, no test is needed to tell us that we ought to keep living. This conclusion arises as antecedent to a consistent pursuit of any action at all; to achieve any goal, we must be alive. To pursue death, the opposite of life, contradicts the very notion of acting, which has life as a prerequisite.  Once we have accepted that premise, an entire system of logical deductions follows with regard to how we ought to approach the external world – the pursuit of knowledge, interactions with others, improvement of living conditions, protection against danger. The physical sciences can provide many of the empirical data and regularities needed to assess alternative ways of living and to develop optimal solutions to human challenges. But ethics is needed to keep the goals of scientific study in mind. The goals should ultimately relate to ways to enhance human well-being. If the pursuit of human well-being – consistent with the imperative of each individual to continue living – is abandoned, then the physical sciences alone cannot provide adequate guidance. Indeed, they can be utilized to produce horrors – as the development of nuclear weapons in the 20th century exemplified. Geopolitical considerations of coercive power and nationalism were permitted to overshadow humanistic considerations of life and peace, and hundreds of thousands of innocents perished due to a massive government-sponsored science project, while the fate of human civilization hung in the balance for over four decades.

The questions cited by Hawking are indeed philosophical questions, at least in part. Aspects of these questions, while they are broadly reliant on the existence of an objective reality, do not require specific experiments to answer. Rather, like many of the everyday questions of our existence, they rely only on the ubiquitous inputs of our day-to-day experience, generalized within our minds and formulated as starting premises for a logical deductive process. The question “How can we understand the world in which we find ourselves?” has different answers based on the realm of focus and endeavor. Are we looking to understand the function of a mechanism, or the origin of a star? Different tools are required for each, but systematic experimentation and observation would be required in each case. This is an opening for the physical sciences and the scientific method. There are, however, ubiquitous observations about our everyday world that can be used as inputs into our decision-making – a process we engage in regularly as we navigate a room, eat a meal, engage in conversation or deliberation, or transport any object whatsoever. Simply as a byproduct of routine living, these observations provide us with ample data for a series of logical deductions and inferences which do not strictly belong to any scientific branch, even though specific parts of our world could be better understood from closer scientific observation.

The question “How does the universe behave?” actually arises in part from a philosophical presupposition that “the universe” is a single entity with any sort of coordinated behavior whatsoever. An alternative view – which I hold – is that the word “universe” is simply convenient mental shorthand for describing the totality of every single entity that exists, in lieu of actually enumerating them all. Thus, while each entity has its own definite nature, “the universe” may not have a single nature or behavior. Perhaps a more accurate framing of that question would be, “What attributes or behaviors are common to all entities that exist?” To answer that question, a combination of ubiquitous observation and scientific experimentation is required. Ubiquitous observation tells us that all entities are material, but only scientific experimentation can tell us what the “building blocks” of matter are. Philosophy alone cannot recommend any model of the atom or of subatomic particles, among multiple competing non-contradictory models. Philosophy can, however, rightly serve to check the logical coherence of any particular model and to reject erroneous interpretations of data which produce internally contradictory answers. Such rejection does not mean that the data are inaccurate, or even that a particular scientific theory cannot predict the behavior of entities – but rather that any verbal understanding of the accurate data and predictive models should also be consistent with logic, causation, and everyday human experience. At the very least, if a coherent verbal understanding is beyond our best efforts at present, philosophy should be vigilant against the promulgation of incoherent verbal understandings. It is better to leave certain scientific models as systems of mathematical equations, uncommented on, than to posit evidently false interpretations that undermine laypeople’s view of the validity of our very existence and reasoning.

After all – to return to the ethical purpose of science – one major goal of scientific inquiry is to understand and explain the world we live in and experience on a daily basis. If any scientific model is said to result in the conclusion that our world does not ‘really’ exist or that our entire experience is illusory (rather than just occasional quirks in our biology, such as those which produce optical illusions, misleading us, in an avoidable manner, under specific unusual circumstances), then it is the philosophical articulation of that model that is flawed. The model itself may be retained in another form – such as mathematical notation – that can be used to predict and study phenomena which continue to defy verbal understanding, with the hope that someday a satisfactory verbal understanding will be attained. Without this philosophic vigilance, scientific breakthroughs may be abused by charlatans for the purpose of misleading people into ruining their lives. As a prominent example of this, multiple strains of mysticism have arisen out of bad philosophical interpretations of quantum mechanics – for instance, the belief, articulated in such pseudo-self-help books as The Secret, that people can mold reality with their thoughts alone and that, instead of working hard and thinking rationally, they can become immensely wealthy and cure themselves of cancer just by wanting it enough. Without a rigorous philosophical defense of reason and objective reality, either by scientists themselves or by their philosopher allies, this mystical nonsense will render scientific enterprises increasingly misunderstood by and isolated from large segments of the public, who will become increasingly superstitious, anti-intellectual, and reliant on wishful thinking.

The question “What is the nature of reality?” is a partly philosophical and partly scientific one. The philosophical dimension – metaphysics – is needed to posit that an objective, understandable reality exists at all. The scientific dimension comes into play in comprehending specific real entities, from stars to biological organisms – relying on the axioms and derivations of metaphysics for the experimental study of such entities to even make sense or promise to produce reliable results. Philosophy cannot tell you what the biological structure of a given organism is like, but it can tell you that there is one, and that praying or wishing really hard to understand it will not reveal its identity to you. Philosophy can also tell you that, in the absence of external conditions that would dramatically affect that biological structure, it will not magically change into a dramatically different structure.

The questions “Where did all this come from? Did the universe need a creator?” are scientific only to a point. When exploring the origin of a particular planet or star – or of life on Earth – they are perfectly amenable to experimentation and to extrapolation from historical evidence. Hence, the birth of the solar system, abiogenesis, and biological evolution are all appropriate subjects of study for the hard sciences. Moreover, scientific study can address the question of whether a particular object needed to have a creator and can, for instance, conclude that a mechanical watch needed to have a watchmaker, but no analogous maker needed to exist to bring about the structure of a complex biological organism. However, if the question arises as to whether existence itself had an origin or needed a creator, this is a matter for philosophy. Indeed, rational philosophy can point out the contradiction in the view that existence itself could ever not have existed, or that a creator outside of existence (and, by definition, non-existent at that time) could have brought existence into being.

Interestingly enough, Hawking comes to a similar conclusion – that cosmological history can be understood by a model that does not include a sentient creator. I am glad that Hawking holds this view, but this specific conclusion does not require theoretical or experimental physics to validate; it simply requires a coherent understanding of terms such as “existence”, “universe”, and “creator”. Causation and non-contradiction both preclude the possibility of any ex nihilo creation. As for the question of whether there exist beings capable of vast cosmic manipulations and even the design of life forms – that is an empirical matter. Perhaps someday such beings will be discovered; perhaps someday humans will themselves become such beings through mastery of science and technology. The first steps have already been taken – for instance, with Craig Venter’s design of a synthetic living bacterium. Ethics suggests to me that this mastery of life is a worthwhile goal and that its proponents – transhumanists – should work to persuade those philosophers and laypeople who disagree.

More constructive dialogue between rational scientists and rational philosophers is in order, for the benefit of both disciplines. Philosophy can serve as a check on erroneous verbal interpretations of scientific discoveries, as well as an ethical guide for the beneficial application of those discoveries. Science can serve to provide observations and regularities which assist in the achievement of philosophically motivated goals. Furthermore, science can serve to disconfirm erroneous philosophical positions, in cases where philosophy ventures too far into specific empirical predictions which experimentation and targeted observation might falsify. To advance such fruitful interactions, it is certainly not productive to proclaim that one discipline or another is “dead”. I will be the first to admit that contemporary philosophy, especially of the kind that enjoys high academic prestige, is badly in need of reform. But such reform is only possible after widespread acknowledgment that philosophy does have a legitimate and significant role, and that it can do a much better job in fulfilling it.