
Oppose Fascism of the Right and the Left – Article by Ron Paul

The New Renaissance Hat
Ron Paul
August 26, 2017
******************************

Following the recent clashes between the alt-right and the group antifa, some libertarians have debated which group they should support. The answer is simple: neither. The alt-right and its leftist opponents are two sides of the same authoritarian coin.

The alt-right elevates racial identity over individual identity. The obsession with race leads them to support massive government interference in the economy in order to benefit members of the favored race. They also favor massive welfare and entitlement spending, as long as it functions as a racial spoils system. Some prominent alt-right leaders even support abortion as a way of limiting the minority population. No one who sincerely supports individual liberty, property rights, or the right to life can have any sympathy for this type of racial collectivism.

Antifa, like all Marxists, elevates class identity over individual identity. Antifa supporters believe government must run the economy because otherwise workers will be exploited by greedy capitalists. This faith in central planning ignores economic reality, as well as the reality that in a free market employers and workers voluntarily work together for their mutual benefit. It is only when the central government intervenes in the economy that crony capitalists have the opportunity to exploit workers, consumers, and taxpayers. Sadly, many on the left confuse the results of the “mixed economy” with free markets.

Ironically, the failure of the Keynesian model of economic authoritarianism, promoted by establishment economists like Paul Krugman, is responsible for the rise of the alt-right and antifa. Despite a recent (and likely short-lived) upturn in some sectors of the economy, many Americans continue to struggle with unemployment and a Federal Reserve-caused eroding standard of living. History shows that economic hardship causes many to follow demagogues offering easy solutions and convenient scapegoats.

Left-wing demagogues scapegoat businesses and the “one percent,” ignoring the distinction between those who made their fortunes serving consumers and those who enriched themselves by manipulating the political process. Right-wing demagogues scapegoat immigrants and minorities, ignoring how these groups suffer under the current system and how they are disproportionately impacted by policies like the war on drugs and police militarization.

As the Keynesian-Krugman empire of big government and fiat currency collapses, more people will be attracted to authoritarianism, leading to an increase in violence. The only way to ensure the current system is not replaced with something even worse is for those of us who know the truth to work harder to spread the ideas of liberty.

While we should be willing to form coalitions with individuals of good will across the political spectrum, we must never align with anyone promoting violence as a solution to social and economic problems. We must also oppose any attempts to use the violence committed by extremists as a justification for expanding the police state or infringing on free speech. Laws against hate speech set a dangerous precedent for censorship of speech unpopular with the ruling elite and the deep state.

Libertarians have several advantages in the ideological battle over what we will replace the Keynesian welfare model with. First, we do not need to resort to scapegoating and demagoguing, as we have the truth about the welfare-warfare state and the Federal Reserve on our side. We also offer a realistic way to restore prosperity. But our greatest advantage is that, while authoritarianism divides people by race, class, religion, or other differences, the cause of liberty unites all who seek peace and prosperity.

Ron Paul, MD, is a former three-time Republican candidate for U.S. President and Congressman from Texas.
***
This article is reprinted with permission from the Ron Paul Institute for Peace and Prosperity.
What the Self-Esteem Movement Got Disastrously Wrong – Article by Dan Sanchez

The New Renaissance Hat
Dan Sanchez
******************************

One of Saturday Night Live’s most popular skits in the early 90s was a mock self-help show called “Daily Affirmation with Stuart Smalley.” Smalley, played by now-Senator Al Franken, would begin each show by reciting into the mirror, “I’m good enough, I’m smart enough, and, doggone it, people like me.”

This was a spoof of the “self-esteem movement,” which in the 80s had been all the rage. In that decade, self-esteem became a hot topic for motivational speakers and almost a book genre unto itself. In 1986, California even established a self-esteem “State Task Force.” But by the next decade, the movement had degenerated into an easy late-night punchline. Even today, Smalley’s simpering smile is the kind of image that the term “self-esteem” evokes for many.

Generation Barney

The self-esteem movement is also widely blamed for its influence on American schools and families. In the name of building self-esteem, teachers and parents showered children with effusive, unconditional praise. In the name of protecting self-esteem, kids were sheltered from any criticism or adverse consequences. The sugary rot spread to children’s television as well. Many of today’s young adults were raised on Barney the Dinosaur, who gushed with “feel-good” affirmations just as sappy as Smalley’s.

I am reminded of a moment from my own education career in the early 2000s. I had designed a classroom game for preschoolers, and one of my colleagues, a veteran early childhood educator, objected that my game involved competition and winners. “Your game can’t have a winner, because that means other kids will be losers,” she explained.

According to critics, this kind of mollycoddling has yielded a millennial generation full of emotionally fragile young adults who, in the workplace, expect praise and affirmation simply for showing up, and who can’t cope with (much less adapt to) constructive criticism. It is also partially blamed for the rise of politically correct university “snowflakes” (aka “crybullies”) and their petulant demands for “safe spaces” on campus.

An Unknown Ideal

Ironically, these criticisms would be heartily endorsed by the father of the self-esteem movement. The whole thing was kicked off by an influential 1969 book titled The Psychology of Self-Esteem, written by Nathaniel Branden (1930-2014), a psychotherapist and one-time colleague and lover of Ayn Rand. It was the first of a long series of books by Branden about self-esteem, which included The Disowned Self (1971), Honoring the Self (1983), How To Raise Your Self-Esteem (1987), and The Power of Self-Esteem (1992).

In The Six Pillars of Self-Esteem (1994), his definitive book on the subject, Branden expressed deep dissatisfaction with prevailing discussions of the concept, especially after the movement became an explosive fad in the 80s. In that period, the concept of self-esteem was distorted by what Branden called “the oversimplifications and sugar-coatings of pop psychology.” Branden declared that:

“I do not share the belief that self-esteem is a gift we have only to claim (by reciting affirmations, perhaps). On the contrary, its possession over time represents an achievement.” [Emphasis added here and below.]

As Branden understood and explained it, self-esteem was an action-oriented, tough-minded concept. If Branden had been Stuart Smalley’s therapist, he would have advised him to stop mouthing empty self-compliments into the mirror and instead to start building real self-esteem through deep reflection and concrete action.

Branden especially deplored how badly education reformers were getting self-esteem wrong. He wrote:

“We do not serve the healthy development of young people when we convey that self-esteem may be achieved by reciting ‘I am special’ every day, or by stroking one’s own face while saying ‘I love me’…”

He elaborated that:

“I have stressed that ‘feel good’ notions are harmful rather than helpful. Yet if one examines the proposals offered to teachers on how to raise students’ self-esteem, many are the kind of trivial nonsense that gives self-esteem a bad name, such as praising and applauding a child for virtually everything he or she does, dismissing the importance of objective accomplishments, handing out gold stars on every possible occasion, and propounding an ‘entitlement’ idea of self-esteem that leaves it divorced from both behavior and character. One of the consequences of this approach is to expose the whole self-esteem movement in the schools to ridicule.”

Branden further clarified:

“Therefore, let me stress once again that when I write of self-efficacy or self-respect, I do so in the context of reality, not of feelings generated out of wishes or affirmations or gold stars granted as a reward for showing up. When I talk to teachers, I talk about reality-based self-esteem. Let me say further that one of the characteristics of persons with healthy self-esteem is that they tend to assess their abilities and accomplishments realistically, neither denying nor exaggerating them.”

Other-Esteem

Branden also criticized those who:

“…preferred to focus only on how others might wound one’s feelings of worth, not how one might inflict the wound oneself. This attitude is typical of those who believe one’s self-esteem is primarily determined by other people.”

Indeed, what most “self-esteem” advocates fail to understand is that other-reliant “self-esteem” is a contradiction in terms. Far from building self-esteem, many of the counselors, teachers, and parents of yesteryear obstructed its growth by getting kids hooked on a spiritual I.V. drip of external validation. Instead of self-esteem, this created a dependence on “other-esteem.”

It is no wonder then that today we are faced with the (often exaggerated) phenomenon of young, entitled, high-maintenance validation-junkies in the classroom and the workplace. Their self-esteem has been crippled by being, on the one hand, atrophied by the psychic crutches of arbitrary authoritarian approval, and, on the other hand, repeatedly fractured by the psychic cudgels of arbitrary authoritarian disapproval.

Almost entirely neglected has been the stable middle ground of letting children learn to spiritually stand, walk, and run on their own: to build the strength of their self-esteem through the experience of self-directed pursuits, setting their own standards, and adapting to the natural consequences of the real world.

Branden also noted that self-esteem is not promoted by:

“…identifying self-worth with membership in a particular group (“ethnic pride”) rather than with personal character. Let us remember that self-esteem pertains to that which is open to our volitional choice. It cannot properly be a function of the family we were born into, or our race, or the color of our skin, or the achievements of our ancestors. These are values people sometimes cling to in order to avoid responsibility for achieving authentic self-esteem. They are sources of pseudo self-esteem. Can one ever take legitimate pleasure in any of these values? Of course. Can they ever provide temporary support for fragile, growing egos? Probably. But they are not substitutes for consciousness, responsibility, or integrity. They are not sources of self-efficacy and self-respect. They can, however, become sources of self-delusion.”

This helps to explain the emotional fragility of young people obsessed with “identity politics,” especially the perverse pride in group victimhood that pervades the campus left. It also speaks to the agitation and resentment of today’s crop of white nationalists and other right-wing “identitarians.” As Ayn Rand wrote:

“The overwhelming majority of racists are men who have earned no sense of personal identity, who can claim no individual achievement or distinction, and who seek the illusion of a “tribal self-esteem” by alleging the inferiority of some other tribe.”

Authentic self-esteem promotes, not codependency and fragility, but independence, enterprise, resilience, adaptability, and a growth mindset: exactly the character traits that individuals, young and old, need more of in today’s economy and political climate.

It is nothing short of tragic that the confusions of the so-called self-esteem movement have turned an indispensable concept into an object of ridicule and blame. Far from being the source of our problems, self-esteem is the missing solution.

Dan Sanchez

Dan Sanchez is Managing Editor of FEE.org. His writings are collected at DanSanchez.me.

This article was originally published on FEE.org and may be freely distributed, subject to a Creative Commons Attribution 4.0 International License, which requires that credit be given to the author. Read the original article.

A Transhumanist Opinion on Privacy – Article by Ryan Starr

The New Renaissance Hat
Ryan Starr

******************************

Privacy is a favorite topic of mine. Maintaining individual privacy is a crucial element of a free society. Yet there are many who want to invade it for personal or political gain. As our digital fingerprint becomes part of our notion of self, how do we maintain our personal privacy on an inherently impersonal network of data? Where do we draw the line on what is private, and how do we enforce it? These questions are difficult to answer from a short-term perspective. However, if we look further into the probable future, we can create a plan that helps protect the privacy of citizens today and for generations to come. Once we take into account the almost certain physical merger of human biology and technology, the answer becomes clear. Our electronic data should be treated as part of our bodily autonomy.

The explosive success of social media has shown that we already view ourselves as partly digital entities. Where we go, what we eat, and who we are with is proudly displayed in cyberspace for eternity. But beyond that, we store unique data about ourselves “securely” on the internet. Bank accounts, tax returns, even medical information are filed away on a server somewhere and specifically identified as us. It’s no longer solely what we choose to let people see. We are physical and digital beings, and it is time we view these two sides as one before we take the next step into enhanced humanity.

Subdermal storage of electronic data is here, and its storage capabilities will expand rapidly. Soon we will be able to store a lot more than just access codes for our doors. It is hard to speculate exactly what people will choose to keep stored this way, and there may even come a time when what we see and hear is automatically stored as well. But before we go too far into what will be stored, we must understand how this information is accessed at present. These implants are currently based on NFC technology. Near-Field Communication is a method of storing and transmitting data wirelessly within a very short distance. Yes, “wireless” is the key word. It means that if I can connect my NFC tag to my smartphone just by waving my hand close to it (usually within an inch or so), then technically someone else can, too. While current antenna limitations and the discreetness of where a person’s tag is implanted make this a highly secure method of storage, advances in technology will eventually make it easier to access the individual. This is why it is urgent that we develop a streamlined policy for privacy.
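The privacy stakes are easier to appreciate once you see how little stands between a reader and a tag’s contents. As a minimal illustration (not from the original article; the function name and sample payload are hypothetical), the sketch below decodes a short-format NDEF Text record, the standard layout most NFC tags use to store data:

```python
# Illustrative sketch: decode a short-format NDEF "Text" record of the
# kind an NFC tag or implant commonly stores (NFC Forum RTD-Text layout).

def decode_ndef_text_record(raw: bytes) -> str:
    """Return the text payload of a single short-format NDEF Text record."""
    header = raw[0]
    assert header & 0x10, "only short records (SR flag set) handled here"
    type_len = raw[1]                  # length of the record-type field
    payload_len = raw[2]               # length of the payload that follows
    record_type = raw[3:3 + type_len]
    assert record_type == b"T", "not a Text record"
    payload = raw[3 + type_len:3 + type_len + payload_len]
    status = payload[0]
    lang_len = status & 0x3F           # low 6 bits: language-code length
    encoding = "utf-16" if status & 0x80 else "utf-8"
    return payload[1 + lang_len:].decode(encoding)

# A hypothetical tag holding the text "door-42" with language code "en":
tag_bytes = bytes([0xD1, 0x01, 0x0A, 0x54, 0x02]) + b"en" + b"door-42"
print(decode_ndef_text_record(tag_bytes))  # -> door-42
```

On a real implant these same bytes would simply arrive over the radio link to whichever reader is in range; nothing in the format itself protects the contents, which is exactly why access policy matters.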

The current Transhumanist position is that personally collected intellectual property, whether stored digitally or organically, is the property of the individual. As such, it should be protected from unauthorized search and download. The current platform also states that each individual has the freedom to enhance their own body as they like, so long as it doesn’t negatively impact others. However, it does not specify what qualifies as a negative impact or how to prevent it. Morphological freedom is a double-edged sword. A person can enhance their ability to access information about themselves, but they can also use enhancements to access others. It is entirely feasible that enhancements will be created that allow one person to hack another. And collecting personal data isn’t the only risk. What if the hacking victim has an artificial heart or an implanted insulin pump? The hacker could potentially access the code the medical device runs on and change or delete it, ultimately leading to death. Another scenario might be hacking into someone’s enhanced sensory abilities. Much like in the novel Ender’s Game, a person could access another to see what they see. This ability could be abused in countless ways, ranging from government surveillance to sexual voyeurism. While this is still firmly within the realm of science fiction, a transhuman society will need to create laws to protect against these person-to-person invasions of privacy.

Now let’s consider mass data collection. Proximity beacons could easily and cheaply be scattered across stores and cities to function as passive collection points much like overhead cameras are today. Retail stands to gain significantly from this technology, especially if they are allowed access to intimate knowledge about customers. Government intelligence gathering also stands to benefit from this capability. Levels of adrenaline, dopamine, and oxytocin stored for personal health analysis could be taken and paired with location data to put together an invasive picture of how people are feeling in a certain situation. Far more can be learned and exploited when discreetly collected biodata is merged with publicly observable activity.

In my mind, these are concerns that should be addressed sooner than later. If we take the appropriate steps to preserve personal privacy in all domains, we can make a positive impact that will last into the 22nd century.
***
Ryan Starr is the leader of the Transhumanist Party of Colorado. This article was originally published on his blog, and has been republished here with his permission.
A Totalitarian State Can Only Rule a Desperately Poor Society – Article by Ryan Miller

The New Renaissance Hat
Ryan Miller
******************************

I recently finished Anthem by Ayn Rand. In this short novella she tells the story of Equality 7-2521 (later called Prometheus), a man living in a dystopian collectivist society which has eclipsed the individual to such a degree that words such as “I” and “my” no longer even exist. The story is about Prometheus’ discovery of himself as an individual and of the world as it was before.

In this society babies are taken immediately from their “parents”, who were assigned to one another by the Council of Eugenics for the sole purpose of breeding, raised in the Home of Infants and then in the Home of Students, and then finally assigned their life-long profession at the age of 15 by the Council of Vocations. Everything is done for the supposed benefit of your brothers, preference is not allowed, superior ability is not allowed, and back-breaking toil is praised as such and not as a way to improve your own or humanity’s situation.

Dictatorship Means Poverty

But what is striking about this story is how accurately it portrays how the world would look under such life-throttling conditions. The Home of the Scholars is praised for having only recently (100 years ago) (re)invented marvels such as candles and glass. Since the times before the “Great Rebirth” and the discovery of the “Great Truth”—namely, “that all men are one and there is no will save the will of all men together”—humanity has, in reality, lost the progress of thousands of years and has reverted back to a time before even such basic utilities as oil lamps or clocks.

But Ayn Rand’s genius is that this is exactly what would happen to the world should it ever discover and truly act upon this “Great Truth.” Yet this is not typically how dystopian stories portray this type of society. Stories such as Brave New World, 1984, The Giver, Divergent, Equilibrium, and many others all love to show some type of ultra-technologically-advanced world against a backdrop of total or near-total oppression, suppression of the individual, and enforcement of conformity.

Despite the almost total (and often drug-induced) destruction of individual will, drive, and creativity, these societies have reached unprecedented levels of technological competence. This is especially true when one considers when many of these stories were written.

In Brave New World, written in 1931, everyone has a personal helicopter, science has advanced to such a degree that mothers and fathers are no longer necessary parts of the breeding process, and everyone is kept docile and happy by the apparently side-effect-free drug Soma.

In 1984 (published in 1949) there are two-way telescreens, minuscule microphones and cameras, and speakwrites that turn whatever you say into text. In the other stories, technology is advanced enough to, among other things, control the weather (The Giver), give kids serum-induced psychological aptitude tests (Divergent), and completely suppress emotions (Equilibrium). In addition to these, there are countless other inventions and practices in these stories and the many others of the dystopian-future genre.

Invention Requires Freedom

The question that needs to be asked, however, is: who invented all these things? These marvelous feats, often put to some malevolent end in the stories, are really all potentially awesome, or at least highly complex and complicated, inventions or innovations. Their conception and ultimate realization would have required years of thought, testing, failure, tinkering, and then success: things which all require individual ingenuity, creativity, and the incentives arising from the prospect of individual pride and gain.

Every great break-through in history was achieved by some odd-ball going against the grain or traditionally accepted view of things in their particular field. If they had done things the way people had always done them, they would never have had the ability to think outside the box and discover or create a unique solution to the problem at hand. Inventors and innovators need their quirkiness, eccentricity, social awkwardness, or will and ability to stand up to the existing order. And they need that coupled with the idea that they have something to gain.

But all of these stories, to different degrees, have built societies that destroy our differences, our emotions, our passions, our ability to think differently, and our incentives to create if we were even able to.

So where do these advanced societies come from? Sure, they could drink for a while from the well of wealth created by the societies that preceded them, but that well would eventually dry up. And without new generations of ambitious and intelligent dreamers, tinkerers, and outside-the-boxers, there would be no one around to rebuild the wealth. This is the world that Ayn Rand creates in Anthem: the hopeless world without individuals.

The existence of advanced societies in many dystopian stories is reminiscent of the problem with the thinking in our world today and in the past: the thinking that things “just happen”—that innovation, invention, and progress are phenomena which occur naturally, regardless of conditions. Though the worlds portrayed in these other novels are far from desirable, the progress alone that the societies in them have reached is a reflection of this idea that most people, at least passively or unknowingly, buy into.

In reality the world would look much more like that of Anthem.


Ryan Miller is a University of Michigan graduate, freelance translator, and aspiring blogger. He is also a Praxis participant in the September 2016 cohort.

This article was originally published on FEE.org. Read the original article.

Bring Back Classical Feminism – Article by Eileen L. Wittig

The New Renaissance Hat
Eileen L. Wittig
******************************

As a college-educated, employed young woman with Opinions, my conversations and social media posts should be filled with angry tirades against The Oppressive Male and strong words demanding Women’s Rights, preferably at the expense of Men’s Rights. Right?

How sexist of you to assume so. Or it would be, if third-wave feminism hadn’t declared that anyone without those opinions is himself (or herself, to be fair) sexist.

Flipping the Tables

Feminism, as with all movements, has become more and more radical with each wave. Members see themselves go from -5 to 0 to break even, and they decide they want to try to get to +5. Sometimes this escalation is good; or, at least, not harmful. But when it progresses to the point where it commits all the wrongdoings it was originally meant to protest, it becomes a problem.

What was feminism, originally? It was a movement that advanced “the radical notion that women are people”: individuals just as deserving of life, liberty, and property as men.

What does feminism mean now? It is a movement that advocates the radical notion that men are lesser than women as people; that men are less deserving of life, liberty, and property; that women are entitled to things simply for being women; that one sex is better than the other, just ’cause. It has gone from being a movement for equality to a movement for supremacy.

This is stupid.

It seems counterintuitive that I, a woman who would profit from this inequality, think it is stupid. But I don’t profit. No one does. Moreover, it makes my life complicated, and the whole thing is insulting.

The Modern Manifestations

According to third-wave feminism, I should want to be paid more for simply being a woman; possibly to make up for the many years that women were paid less than men as a matter of course. But the whole idea of being given a raise or promotion based on gender is insulting to my abilities as a person (yes, and as a woman).

I do not need to be given that promotion thanks to something I can’t even help. I can earn it, thank you very much. My brain is more than capable; I don’t need the physical attributes of my double-X chromosomes to do it for me.

Third-wave feminists intimidate me. Just as men do, I get a little nervous when a woman emphatically identifies herself as a feminist to me. I spend the rest of the time waiting to be accused of hating my sex, being a barnacle on the wheel of progress, and just generally being a horrible example of womanhood.

I’m a feminist, but I’m a “classical feminist” – I am all for voting rights and equal pay and the cultural destruction of everyday sexism. I get mad when a man praises me in a patronizing voice, barely resisting the urge to vocalize the implied clause, “…for a girl.”

I’m tired of the double standard we have for sexual promiscuity, one that blames and even attacks women while men are excused for “just being men.” But don’t lower your expectations for me to the standards for men – raise theirs to mine!

I ardently want real equality between the sexes. But to third-wavers, that often does not make me feminist enough.

Third-wave feminists intimidate men, too. That’s the point. But then what happens? Men stop stepping up. They back off, they stop trying, they become enfeebled. Suddenly humanity’s “other half” becomes less productive, less interesting, and more pathetic. Women, feminists included, then have to contribute much more heavily to the economy and society to support the weaker, less productive half they created. They would hate that. So would I.

Men descending to a lower level does not raise women to a higher one: quite the opposite.

Third-wave feminism says I should hate men. I’m supposed to think they’re big, stupid oafs. I don’t. I think men are wonderful. I have a lot of close male friends who I would trust with my life if I had to. I don’t understand them all the time, but I don’t understand women all the time either. I don’t even understand myself all the time. But I don’t hate myself.

Peaceful Feminism

As Ludwig von Mises said in 1922, when my classical feminist forbears were paving the way for their radical followers, “So far as Feminism seeks to adjust the legal position of woman to that of man, so far as it seeks to offer her legal and economic freedom to develop and act in accordance with her inclinations, desires, and economic circumstances – so far it is nothing more than a branch of the great liberal movement, which advocates peaceful and free evolution.”

Fighting for women’s equality is a wonderful thing. But it only works if it is a fight for equality, not for unearned dominance at everyone else’s expense. It’s a fine line that needs to be walked in every aspect of life. We need to get used to walking it, but it is possible.

Eileen L. Wittig

Eileen Wittig is the Associate Editor at the Foundation for Economic Education.

This article was originally published on FEE.org. Read the original article.

Enlightened Selfies – Narcissism and Human Rights – Article by B.K. Marcus

The New Renaissance Hat
B.K. Marcus
******************************

When the press refers to “Generation Selfie,” do we sense a sneer? It’s almost as if the term selfie is shorthand not for self-portrait but for self-involved, self-absorbed, or simply selfish.

Selfies are widespread among millennials, many of whom grew up with camera phones. A poll commissioned by electronics maker Samsung reveals that fully 30 percent of all photos taken by 18- to 24-year-olds are selfies. For many of us, the selfie is just the new normal, whether or not we fill our own smartphones with self-regarding snapshots. But, as Pamela Rutledge writes for Psychology Today, some see the selfie generation “as proof of cultural — or at least generational — narcissism and moral decline.” And calling Generation Selfie a bunch of narcissists may not be rhetorical excess: according to a paper in the journal Personality and Individual Differences, selfie-posting behavior is indeed associated with narcissistic personality disorder.

Does this mean that modern society is growing more self-obsessed?

Belief in the “moral decay” epitomized by self-directed amateur photography results from a more general conviction that the virtues of community and altruism are being driven out by our culture’s overemphasis on the individual. Whether the culprit is capitalism, technology, or Western civilization more generally, the idea is that historically recent developments are fracturing our communal bonds and leading to a loss of empathy, compassion, and duty — replacing concern for the well-being of a larger group with a privileging of the atomized individual.

Inventing the Modern Self

But the development is not, in fact, historically recent. The selfie as we now know it may seem like a result of social media and the camera phone, but our society’s apparent obsession with visual self-presentation is much older — and significantly more beneficial — than the critics understand.

“It’s easy to make fun of our penchant for taking selfies,” writes popular science author Steven Johnson in How We Got to Now: Six Innovations That Made the Modern World, “but in fact there is a long and storied tradition behind that form of self-expression.”

The original selfie generation emerged in Renaissance Italy, the product of a different technological innovation. Centuries before the bidirectional camera phone, there was the culturally disruptive technology of the glass mirror.

“The interesting thing about self-portraiture,” Johnson tells us, “is that it effectively doesn’t exist as an artistic convention in Europe before 1400.” That’s because, for most of human history, we got very few chances to see ourselves as others see us. The best we could do was a rippled reflection glimpsed in water or a tarnished image on a metal pot.

That all changed when glassmakers “figured out a way to combine their crystal-clear glass with a new innovation in metallurgy, coating the back of the glass with an amalgam of tin and mercury to create a shiny and highly reflective surface. For the first time, mirrors became part of the fabric of everyday life.”

One result was the invention of linear perspective in painting. Prior to the Renaissance, visual representation was more symbolic, less what we would now call realistic. Renaissance artists used the new technology of the mirror to compare what they put on the canvas with what they saw framed in the glass. Sometimes, of course, what they saw in the looking glass was their own reflection.

“The mirror helped invent the modern self,” Johnson writes, “in some real but unquantifiable way.”

Soon after, “social conventions as well as property rights and other legal customs began to revolve around the individual rather than the older, more collective units: the family, the tribe, the city, the kingdom.” Furthermore, “orienting laws around individuals led directly to an entire tradition of human rights and the prominence of individual liberty in legal codes.”

Inventing Humanity

In a different investigation of the individualist tradition, historian Lynn Hunt observes in her book Inventing Human Rights, “For rights to be human rights, all humans everywhere in the world must possess them equally and only because of their status as human beings.”

There was no such understanding of humanity for most of history. Love, compassion, and sympathy may have existed from the beginning, but only between people in narrowly defined groups. The treatment of outsiders was far harsher.

Slavery and torture, today considered egregious violations of human rights, went largely unquestioned until a few hundred years ago. Since at least the time of Aristotle, it was typical to divide the world between “us,” the naturally civilized, and “them,” the naturally enslaved. Torture, in the ancient world, was originally limited to slaves, but over time the practice became more acceptable, and in the second century it was expanded to include nominally free lower-class victims. By the Middle Ages, torture before execution had become a form of spectacle, public entertainment for the whole family.

How, then, did we develop a sense of universal humanity and of natural rights for all human beings? Specifically, Hunt asks about the Enlightenment thinkers in 18th-century France and America, for whom such rights were “self-evident.”

How did these men, living in societies built on slavery, subordination, and seemingly natural subservience, ever come to imagine men not at all like them and, in some cases, women too, as equals?

And how did the 18th-century public come to agree with them? The answer Hunt offers is that the so-called self-evidence of individual human rights was largely the result of widespread reading in a genre that was still relatively new at the time: the epistolary novel. Enlightenment thinkers were familiar with first-person narrative in a way that earlier generations would have found alien. Novels introduced readers to the inner lives of characters unlike themselves.

Johnson, too, talks about the emergence of the novel and its impact on the moral imagination, but he traces the origins of the modern novel itself to the same innovation that gave rise to linear perspective in painting: “The psychological novel … is the kind of story you start wanting to hear once you begin spending meaningful hours of your life staring at yourself in the mirror.”

Thinking about themselves as the individuals staring back through the glass, “people began writing about their interior lives with far more scrutiny,” and “the novel emerged as a dominant form of storytelling, probing the inner mental lives of its characters with an unrivaled depth.”

The Innovation of Empathy

What does this sort of growing self-obsession have to do with the rights of others?

Empathy, Hunt points out, “requires a leap of faith, of imagining that someone else is like you.” This is the idea that “novels generated … by inducing new sensations about the inner self.”

Or as Johnson puts it,

Entering a novel, particularly a first-person narrative, was a kind of conceptual parlor trick: it let you swim through the consciousness, the thoughts and emotions, of other people more effectively than any aesthetic form yet invented.

Spending time in someone else’s head, even if that someone else is fictional, is practice for thinking about real people’s experiences.

So, according to this story, the technology of glass mirrors leads to linear perspective, the Renaissance, and a new literary form called the novel. The novel, in turn, transforms the popular imagination in such a way that even strangers — those outside your immediate family, class, and religious affiliation — come to be understood as autonomous individuals with their own inner lives, much like your own. And for the first time in history, people come to question the practices of torture and slavery, practices at least as old as civilization and far more universal than any understanding of rights prior to the Enlightenment.

The “invention” of the individual ushered in not ever more selfishness and less regard for the group, but an expanding empathy and a more inclusive, nearly universal, sense of “us” — with ever fewer people relegated to a place outside our moral community.

Trading with the Other

Johnson notes that the technological innovation at the beginning of the story isn’t enough by itself to have produced the larger cultural shift:

The Renaissance also benefited from a patronage system that enabled its artists and scientists to spend their days playing with mirrors instead of, say, foraging for nuts and berries. A Renaissance without the Medici — not the individual family, of course, but the economic class they represent — is as hard to imagine as the Renaissance without the mirror.

The same can be said about the economy that produced the Enlightenment: it is hard to imagine an era of growing empathy, open-mindedness, and belief in universal rights without the market that provided a growing readership for the epistolary novel.

Capitalism establishes the conditions in which individualism can thrive. Individualism, in turn, helps the market economy to grow and to propagate the belief in rights-bearing individuals.

Contrary to today’s critics, then, who assume that the individualist mentality leads to the absence of empathy, with the advent of individualism came the invention of empathy, at least as applied to those outside the tribe, clan, or caste.

Maybe narcissists do feel compelled to photograph themselves. Camera phones make it easier than ever. But the myth of Narcissus is ancient. And the history of reflective technology points us to a different understanding of cause and effect for the selfie generation.

The 21st-century self-portrait can play the same role now as its earlier incarnations did during the Renaissance and the Enlightenment. Like individualism more generally, the selfie invites us to explore questions of identity and of where we fit in an ever more interconnected community.

B.K. Marcus is editor of the Freeman. His website is bkmarcus.com.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution 4.0 International License, which requires that credit be given to the author.

Pope Francis vs. the Cure of Reason – Article by Edward Hudgins

The New Renaissance Hat
Edward Hudgins
September 27, 2015
******************************

A young girl was recently interviewed on TV about her encounter with Pope Francis on his visit to the United States. She cried with joy as she described how he touched her on the forehead and offered a blessing. Now, she said, she might get the miracle she’s prayed for. Maybe someday she’ll be able to walk.

Who could not be moved by a crippled child who wants to be cured? But what is really wrenching is the fact that this child and so many others look to faith rather than science and reason.

Medical breakthroughs

On the same day the Pope was touching the little girl, a news story was circulating about a breakthrough in prosthetics: a brain implant has restored a sense of touch to a man with a robotic hand.

Another story in recent months documented technology that allows individuals to control their artificial limbs with their thoughts.

Some even express fears that bionic legs in the future could be so good that they will be preferred to the natural ones we’re born with.

The sightless have sought divine intercession to cure blindness since before the time of Jesus. A few days before the Pope toured D.C., a breakthrough was announced that involves applying a light-sensitive protein found in algae to the back of the retina to, in effect, replace the rods and cones destroyed by certain diseases. The technique has been successful in mice, and human tests are now coming.

This restorative treatment has welcome competition. Last month saw a man receive the first bionic eye implant.

And let’s not forget that deafness is in the process of being vanquished thanks to cochlear implants.

Free markets needed

Free markets, of course, if allowed to operate, will make what are now pricey, experimental medical technologies affordable for most, just as markets have allowed entrepreneurs to create and bring down the prices of computers, smartphones, tablets, Wi-Fi, and all the hardware and software of the information revolution.

Handicapped individuals, like the girl who was so happy the Pope touched her, might have bright futures indeed. But they need to recognize that it is not faith that will make them whole. It is reason.

Human reason needed

It is the power of the human mind, especially in science and engineering, that has brought about the benefits of our modern world. Yet where are the parades, the speeches before Congress, and the celebrations that recognize the sources of such benefits and encourage reason and achievement as foundational values in our culture? Why do so many seek hope in faith and otherworldly miracles when real achievements—“miracles” of the human mind—are all around us? Why do so few understand that training minds and encouraging entrepreneurship is the best way to ensure a healthy, prosperous future? With all the enthusiasm we see for the Pope, where is the enthusiasm for the actual creators and achievers in our world?

Ironically, the Pope, in his economic ignorance, denounces the free market system that could cure that little girl. And he promotes draconian economic restrictions to fight hypothesized global warming, restrictions that would ensure that the poor he says he cares so much about will be with us always. The Pope—and all of us—indeed should empathize with that little girl. But he should be touting reason as the cure. This Jesuit Pope needs to read his Thomas Aquinas!

Those who are enthusiastic about the Pope’s visit because he inspires hope for a better world had better look to the real source of all our blessings: the human mind.

Dr. Edward Hudgins directs advocacy and is a senior scholar for The Atlas Society, the center for Objectivism in Washington, D.C.

Copyright, The Atlas Society. For more information, please visit www.atlassociety.org.

How Anti-Individualist Fallacies Prevent Us from Curing Death – Article by Edward Hudgins

The New Renaissance Hat
Edward Hudgins
July 3, 2015
******************************

Are you excited about Silicon Valley entrepreneurs investing billions of dollars to extend life and even “cure” death?

It’s amazing that such technologically challenging goals have gone from sci-fi fantasies to fantastic possibilities. But the biggest obstacles to life extension could be cultural: the anti-individualist fallacies arrayed against this goal.

Entrepreneurs defy death

A recent Washington Post feature documents the “Tech titans’ latest project: Defy death.” Peter Thiel, PayPal co-founder and venture capitalist, has led the way, raising awareness and funding regenerative medicine. He explains: “I’ve always had this really strong sense that death was a terrible, terrible thing… Most people end up compartmentalizing and they are in some weird mode of denial and acceptance about death, but they both have the result of making you very passive. I prefer to fight it.”

Others prefer to fight as well. Google CEO Larry Page created Calico to invest in start-ups working to stop aging. Oracle’s Larry Ellison has also provided major money for anti-aging research. Google’s Sergey Brin and Facebook’s Mark Zuckerberg both have funded the Breakthrough Prize in Life Sciences Foundation.

Beyond the Post piece, we can applaud Singularity University, co-founded by futurist Ray Kurzweil, who believes humans and machines will merge in the decades ahead to become transhumans, and by X-Prize founder Peter Diamandis, for its education in the exponential technologies needed to reach these goals.

The Post piece points out that while in the past two-thirds of science and medical research was funded by the federal government, today private parties put up two-thirds. These benefactors bring their entrepreneurial talents to their philanthropic efforts. They are restless for results and not satisfied with the slow pace of government bureaucracies plagued by red tape and politics.

“Wonderful!” you’re thinking. “Who could object?”

Laurie Zoloth’s inequality fallacy

Laurie Zoloth, for one. This Northwestern University bioethicist argues that “Making scientific progress faster doesn’t necessarily mean better — unless if you’re an aging philanthropist and want an answer in your lifetime.” The Post quotes her further as saying that “Science is about an arc of knowledge, and it can take a long time to play out.”

Understanding the world through science is a never-ending enterprise. But in this case, science is also about billionaires wanting answers in their lifetimes because they value their own lives foremost and they do not want them to end. And the problem is?

Zoloth grants that it is “wonderful to be part of a species that dreams in a big way,” but she also wants “to be part of a species that takes care of the poor and the dying.” Wouldn’t delaying or even eliminating dying be even better?

The discoveries these billionaires facilitate will help millions of people in the long run. But her objection seems rooted in a morally distorted affinity for equality of condition: the feeling that it is wrong for some folks to have more than others—never mind that they earned it—in this case, early access to life-extending technologies. She seems to feel that it is wrong for these billionaires to put their own lives, loves, dreams, and well-being first.

We’ve heard this “equality” nonsense for every technological advance: only elites will have electricity, telephones, radios, TVs, computers, the internet, smartphones, whatever. Yes, there are first adopters, those who can afford new things. Without them footing the bills early on, new technologies would never become widespread and affordable. This point should be blindingly obvious today, since the spread of new technologies in recent decades has accelerated. But in any case, the moral essential is that it is right for individuals to seek the best for themselves while respecting their neighbors’ liberty to do the same.

Leon Kass’s “long life is meaningless” fallacy

The Post piece attributes to political theorist Francis Fukuyama the belief that “a large increase in human life spans would take away people’s motivation for the adaptation necessary for survival. In that kind of world, social change comes to a standstill.”

Nonsense! As average lifespans doubled in past centuries, social change—mostly for the better—accelerated. Increased lifespans in the future could allow individuals to take on projects spanning centuries rather than decades. Indeed, all who love their lives regret that they won’t live to see, experience, and help create the wonders of tomorrow.

The Post cites physician and ethicist Leon Kass who asks: “Could life be serious or meaningful without the limit of mortality?”

Is Kass so limited in imagination or ignorant of our world that he doesn’t appreciate the great, long-term projects that could engage us as individuals seriously and meaningfully for centuries to come? (I personally would love to have the centuries needed to work on terraforming Mars, making it a new habitat for humanity!)

Fukuyama and Kass have missed the profound human truth that we each as individuals create the meaning for our own lives, whether we live 50 years or 500. Meaning and purpose are what only we can give ourselves as we pursue productive achievements that call upon the best within us.

Francis Fukuyama’s anti-individualist fallacy

The Post piece quotes Fukuyama as saying: “I think that research into life extension is going to end up being a big social disaster… Extending the average human life span is a great example of something that is individually desirable by almost everyone but collectively not a good thing. For evolutionary reasons, there is a good reason why we die when we do.”

What a morally twisted reason for opposing life extension! Millions of individuals should literally damn themselves to death in the name of society. Then count me anti-social.

Some might take from Fukuyama’s premise a concern that millions of individuals living to 150 will spend half that time bedridden, vegetating, consuming resources, and not producing. But the life extension goal is to live long with our capacities intact—or enhanced! We want 140 to be the new 40!

What could be good evolutionary reasons why we die when we do? Evolution only metaphorically has “reasons.” It is a biological process that blindly adapted us to survive and reproduce: it didn’t render us immune to ailments. Because life is the ultimate value, curing those ailments rather than passively suffering them is the goal of medicine. Life extension simply takes the maintenance of human life a giant leap further.

Live long and prosper

Yes, there will be serious ethical questions to face as the research sponsored by benevolent billionaires bears fruit. But individuals who want to live really long and prosper in a world of fellow achievers need to promote human life as the ultimate value and the right of all individuals to live their own lives and pursue their own happiness as the ultimate liberty.

Dr. Edward Hudgins directs advocacy and is a senior scholar for The Atlas Society, the center for Objectivism in Washington, D.C.

Copyright, The Atlas Society. For more information, please visit www.atlassociety.org.

Microaggressions and Microwonders – Are Mountains Out of Molehills Proof the World’s Getting Better? – Article by Steven Horwitz

The New Renaissance Hat
Steven Horwitz
May 27, 2015
******************************

A recurring theme of recent human history is that the less of something bad we see in the world around us, the more outrage we generate about the remaining bits.

For example, in the 19th century, outrage about child labor grew as the frequency of child labor was shrinking. Economic forces, not legislation, had raised adult wages to a level at which more and more families did not need additional income from children to survive, and children gradually withdrew from the labor force. As more families enjoyed having their children at home or in school longer, they became less tolerant of those families whose situations did not allow them that luxury, and the result was the various moral crusades, and then laws, against child labor.

We have seen the same process at work with cigarette smoking in the United States. As smoking has declined over the last generation or two, we have become ever less tolerant of those who continue to smoke. Today, that outrage continues in the form of new laws against vaping and e-cigarettes.

The ongoing debate over “rape culture” is another manifestation of this phenomenon. During the time that reasonably reliable statistics on rape in the United States have been collected, rape has never been less frequent than it is now, and it is certainly not as institutionalized as a practice in the Western world as it was in the past. Yet despite this decline — or in fact because of it — our outrage at the rape that remains has never been higher.

The talk of the problem of “microaggressions” seems to follow this same pattern. The term refers to the variety of verbal and nonverbal forms of communication that are said to constitute disrespect for particular groups, especially those who have been historically marginalized. So, for example, the use of exclusively masculine pronouns might be construed as a “microaggression” against women, or saying “ladies and gentlemen” might be seen as a microaggression against transsexuals. The way men take up more physical space on a train or bus, or the use of the phrase “walk-only zones” (which might offend the wheelchair-bound) to describe pedestrian crossways, are other examples.

Those who see themselves as the targets of microaggressions have often become very effective entrepreneurs of outrage in trying to parlay these perceived slights into indications of much more pervasive problems of sexism or racism and the like. Though each microaggression individually might not seem like much, they add up. So goes the argument.

I don’t want to totally dismiss the underlying point here, as it is certainly true that people say and do things (often unintentionally) that others will find demeaning, but I do want to note how this cultural phenomenon fits the pattern identified above. We live in a society in which the races and genders (and classes!) have never been more equal. Really profound racism and sexism is far less prominent today than it was 50 or 100 years ago. In a country where the president is a man of color and where one of our richest entertainers is a woman of color, it’s hard to argue that there hasn’t been significant progress.

But it is exactly that progress that leads to the outrage over microaggressions. Having steadily pushed back the more overt and damaging forms of inequality, and having stigmatized them as morally offensive, we have less tolerance for the smaller bits that remain. As a result, we take small behaviors that are often completely unintended as offenses and attempt to magnify them into the moral equivalent of past racism or sexism. Even the co-opting of the word “aggression” to describe what is, in almost all cases, behavior that is completely lacking in actual aggression is an attempt to magnify the moral significance of those behaviors.

Even if we admit that some of such behaviors may well reflect various forms of animus, there are two problems with the focus on microaggressions.

First, where do we draw the line? Once these sorts of behaviors are seen as slights with the moral weight of racism or sexism, we can expect to see anyone and everyone who feels slighted about anything someone else said or did declare it a “microaggression” and thereby try to capture the same moral high ground.

We are seeing this already, especially on college campuses, where even the mere discussion of controversial ideas that might make some groups uncomfortable is being declared to be a microaggression. In some cases this situation is leading faculty to stop teaching anything beyond the bland.

Second, moral equivalence arguments can easily backfire. For example, if we, as some feminists were trying to do in the 1980s, treat pornography as the equivalent of rape, hoping to make porn look worse, we might end up causing people to treat real physical rape less seriously given that they think porn is largely harmless.

So it goes with microaggressions: if we try to raise men taking up too much room on a bus seat into a serious example of sexism, then we risk people reacting by saying, “Well, if that’s what sexism is, then why should I really worry too much about sexism?” The danger is that when far more troubling examples of sexism or racism appear (for example, the incarceration rates of African-American men), we might be inclined to treat them less seriously.

It is tempting to want to flip the script on the entrepreneurs of microaggression outrages and start to celebrate their outrages as evidence of how far we’ve come. If men who take the middle armrest on airplanes (as obnoxious as that might be) are a major example of gender inequality, we have come far indeed. But as real examples of sexism and racism and the like do still exist, I’d prefer another strategy to respond to the talk of microaggressions.

Let’s spend more time celebrating the “microwonders” of the modern world. Just as microaggression talk magnifies the small pockets of inequality left and seems to forget the larger story of social progress, so does our focus on large social and economic problems in general cause us to forget the larger story of progress that is often manifested in tiny ways.

We live in the future that prior generations only imagined. We have the libraries of the world in our pockets. We have ways of easily connecting with friends and strangers across the world. We can have goods and even services of higher quality and lower cost, often tailored to our particular desires, delivered to our door with a few clicks of a button. We have medical advances that make our lives better in all kinds of small ways. We have access to a variety of food year-round that no king in history had. The Internet brings us happiness every day through the ability to watch numerous moments of humor, human triumph, and joy.

Even as we recognize that the focus on microaggressions means we have not yet eliminated every last trace of inequality, we should also recognize that it means we’ve come very far. And we should not hesitate to celebrate the microwonders of progress that often get overlooked in our laudable desire to continue to repair an imperfect world.

Steven Horwitz is the Charles A. Dana Professor of Economics at St. Lawrence University and the author of Microfoundations and Macroeconomics: An Austrian Perspective, now in paperback.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution 4.0 International License, which requires that credit be given to the author.

A Portrait of the Classical Gold Standard – Article by Marcia Christoff-Kurapovna

The New Renaissance Hat
Marcia Christoff-Kurapovna
April 15, 2015
******************************

“The world that disappeared in 1914 appeared, in retrospect, something like our picture of Paradise,” wrote the economist Cecil Hirsch in his June 1934 review of R.G. Hawtrey’s classic, The Art of Central Banking (1933). Hirsch bemoaned the loss of the far-sighted restraint that had once prevailed among the “bankers’ banks” of the West, concluding that modern times “had failed to attain the standard of wisdom and foresight that prevailed in the 19th century.”

That wisdom and foresight was once institutionalized throughout an international monetary culture — gold-based, wary of credit, and contemptuous of debt, public or private. This world’s central banks included the Bank of England, the Bank of France, the Swiss National Bank, the early Federal Reserve, the Imperial Bank of Austria-Hungary, and the German Reichsbank. The entrenched hard-money ideology of the time restrained all of them. The Bank of Russia, for example, which once required 50 percent to 100 percent gold backing of all notes issued, possessed the second-largest gold reserves on the planet at the turn of the twentieth century.

“The countries that were tied together in the gold standard system represented to a not inconsiderable degree a community of interest in and responsibility for the maintenance of economic and financial stability throughout the world,” recounted Adolph C. Miller, member of the Federal Reserve Board from 1914 to 1936, in The Proceedings of the Academy of Political Science in May 1936. “The gold standard was the one outstanding symbol of unity and economic solidarity which the nineteenth century world had developed.”

It was a time when “automatic market forces,” as economists of the day referred to them, prevailed over monetary management. Redeemability of money in (fine) gold ensured, within limits, stability in foreign exchange rates. Credit was extended only as far as reserve ratios would allow, and central banks were required to keep fixed reserves of gold against notes-in-circulation and against demand deposits.
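
The reserve-ratio constraint described above amounts to simple arithmetic: a bank holding G in gold under a required backing ratio r may keep at most G / r in notes and demand deposits outstanding. A minimal sketch in Python (the function name and figures are illustrative, not from the article; the 50 percent floor echoes the Bank of Russia rule mentioned earlier):

```python
def max_note_issue(gold_reserves: float, backing_ratio: float) -> float:
    """Largest volume of notes-in-circulation plus demand deposits a
    bank may carry while keeping the required fraction of gold backing."""
    if not 0.0 < backing_ratio <= 1.0:
        raise ValueError("backing ratio must be in (0, 1]")
    return gold_reserves / backing_ratio

# 100 units of gold under a 50 percent backing rule caps issue at 200;
# full (100 percent) backing caps issue at the reserves themselves.
print(max_note_issue(100.0, 0.50))  # 200.0
print(max_note_issue(100.0, 1.00))  # 100.0
```

The point of the sketch is only that the ceiling on credit is mechanical: with fixed reserves, a stricter backing ratio leaves less room for note issue.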

When Markets Dominated Monetary “Policy”

Gold flows regulated international price relationships through markets, which adjusted themselves accordingly: prices rose when there was an influx of gold — for example, when one country received a debt payment from another country (always in gold), or during such times as the California and Australian gold rushes of the mid-nineteenth century. These inflows meant credit expansion and a rise in prices. An outflow of gold meant credit contraction, and price deflation followed.
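
The adjustment loop just described (gold inflow: credit expansion and rising prices; gold outflow: credit contraction and deflation) can be caricatured in a few lines of Python. This is only an illustrative toy, not a model from the article, and the sensitivity parameter is invented:

```python
def price_level_step(price_level: float, gold_flow: float,
                     sensitivity: float = 0.01) -> float:
    """One step of the price-specie-flow story: a positive gold_flow
    (inflow) expands credit and raises prices; a negative flow
    (outflow) contracts credit and lowers them."""
    return price_level * (1.0 + sensitivity * gold_flow)

p_after_inflow = price_level_step(100.0, gold_flow=+10.0)   # debt payment received in gold
p_after_outflow = price_level_step(100.0, gold_flow=-10.0)  # gold shipped abroad
# An inflow raises the price level; an outflow lowers it.
assert p_after_outflow < 100.0 < p_after_inflow
```

The self-correcting part of the mechanism, which the toy omits, is that higher domestic prices discourage exports and attract imports, reversing the gold flow.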

The major central banks maintained the efficiency of that standard in such a way that “any disturbance of economic or financial character originating at any point in the world which might threaten the continued maintenance of economic equilibrium was quickly detected by foreign exchanges,” Miller, the Federal Reserve board member, noted in his paper. “In this way, the gold standard system became in a very real sense a regime or rule of economic health, a method of catching economic disturbances in the bud.”

The Bank of England, the grand master of them all, was the financial center of the universe, and its handling of credit policy was so disciplined that it secured the top spot while not even holding the largest gold reserves. Consistent in its belief that protection of reserves was the chief, and only important, criterion of credit policy, England became the leading exporter of capital, the free market for gold, the international discount market, and international banker for the trade of other countries as well as her own. The world was, in this sense, on the sterling standard.

The Bank of France, wisely admonished by its founder, Napoleon, to make sure France was always a creditor country, was so replete with reserves that it made England a 500 million franc loan (in 1915 numbers) at the onset of World War I. Switzerland, perhaps the last “19th-century-style” holdout today with unlimited-liability private bankers and strict debt-ceiling legislation, also required high standards of its National Bank, founded in 1907. By the 1930s, that country had higher banking reserves than the US; the Swiss franc was never explicitly devalued, unlike nearly every other Western nation’s currency, and the country’s domestic price level remained the most stable in the world.

For a time, the disciplined mindset of these banks found its way across the Atlantic, where the idea of a central bank had long been the subject of heated debate in the US. The economist H. Parker Willis, writing about the controversy in The Journal of the Proceedings of the Academy of Political Science, October 1913, admonished: “The Federal Reserve banks are to be ‘bankers’ banks,’ and they are intended to do for the banker what he himself does for the public.”

At first, the advice was heeded: in September 1916, almost two years after its founding on December 23, 1913, the fledgling Fed worked out an amendment to its gold policy on the basis of a very conservative view of credit. This new policy sought to restrain “the undue and unnecessary expansion of credit,” wrote Fed board member Miller, in an article for The American Economic Review, in June 1921.

The Bank of Russia, during the second half of the nineteenth century, steered itself through the Crimean War, the Russo-Turkish War, the Russo-Japanese War, impending Balkan wars — not to mention all that was to follow — and managed to emerge with sound fiscal policies and massive gold reserves. According to The Economist of May 20, 1899, Russian holdings were 95 million pounds sterling of gold, while the Bank of France held 78 million sterling worth. (Austria-Hungary held 30 million sterling worth of gold and the Bank of England 30 million sterling worth of both gold and silver.) “Russia up to the very moment of rupture [with Japan, 1904–1905], was working imperturbably at the progressive consolidation of her finances,” reported Karl Helfferich of the University of Berlin, at a meeting of The Royal Economic Society [UK] in December 1904. “Even in years of industrial crises and defective harvests, her foreign trade showed an excess of exports over imports more than sufficient to compensate payments sent abroad. And, as a guarantee of her monetary system, she has succeeded in amassing and maintaining a vast reserve of gold.”

These banks, in turn, drew on the medieval/Renaissance and Baroque-era banking traditions of the Hanseatic League, the Bank of Venice, and Amsterdam banks. Payment-on-demand “in good and heavy gold” was like a blood-oath binding the banker-client relationship. The transfer of credit “did not arise from any such substitution of credit for money,” noted Charles F. Dunbar, in The Quarterly Journal of Economics of April 1892, “but from the simple fact that the transfer in-bank saved the necessity of counting coin and manual delivery of every transaction.”

Bankers were forbidden to deal in certain commodities and could not make loans or create credit for the purchase of such commodities; both foreigners and citizens were barred from buying silver on credit unless the same amount in cash was in the bank. According to Dunbar, a Venetian law of 1403 on reserve requirements became the basis of US banking law on the deposits of public securities in the late 1800s.
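The Venetian rule on silver, as Dunbar reports it — no purchase on credit unless an equal amount in cash stands in the bank — amounts to a 100-percent cash-cover requirement. A minimal sketch, with hypothetical names and amounts:

```python
# Hypothetical sketch of the 1403 Venetian rule reported by Dunbar:
# a silver purchase on credit is permitted only if an equal amount
# in cash is already on deposit with the bank.

def may_buy_silver_on_credit(cash_on_deposit, purchase_amount):
    """True if the buyer's cash on deposit fully covers the credit purchase."""
    return cash_on_deposit >= purchase_amount

assert may_buy_silver_on_credit(cash_on_deposit=500, purchase_amount=400)
assert not may_buy_silver_on_credit(cash_on_deposit=300, purchase_amount=400)
```

The rule is a forerunner of modern reserve requirements precisely because it ties credit extension one-for-one to cash actually held.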

After the fall of bi-metallism in the 1870s, gold continued to perform monetary functions among the main countries of the Western world (and the well-administered Bank of Japan). It was the only medium of exchange and the only currency with unrestricted legal tender. It became the vaunted “measure of value.” Bank currency notes were simply used as auxiliary to gold and, in general, did not enjoy the privilege of legal tender.

The End of An Era

It was certainly not a flawless system, or without periodic crises. But central banks had to act in an exceptionally prudent manner given the widespread public distrust of paper money.

As economist Andrew Jay Frame of the University of Chicago, writing in The Journal of Political Economy, in January 1912, noted: “During panics in Britain in 1847 and 1866, when cash payments were suspended, the floodgates of cash were opened [by The Bank of England], the governor sent word to the street that solvent banks would be accommodated, and the panic was relieved.” Frame then adds: “However, this extra cash and the increased loans that went with it were very quickly put to an end to avoid credit expansion.”

The US was equally confident of its prudent attitude. Adolph Miller, writing of Federal Reserve policy, remarked: “The three chief elements of the policy of a central bank or system of reserve holding institutions are best disclosed in connection with the attitude towards 1) gold 2) currency 3) credit.” He noted proudly: “The federal reserve system has met [these] tests on the whole with remarkable success.”

But after World War I, a different international landscape was left behind. England had been displaced as the center of international finance; the US and France emerged as the chief post-war creditor countries. The mechanism of the gold standard to which depreciated currencies could be related no longer existed. Only the US was left with a full gold standard. England and France had a gold bullion standard and other countries (Germany, primarily) had a gold-exchange standard.

A matrix of unbalanced trade relationships began to saturate the international economy. Then, with so many foreign countries attendant upon its speculative boom, the US manipulated its own domestic credit policies to ease credit and exchange-standard controls. This eventually culminated in the international financial crisis of 1931. Under Bretton Woods (1944), the gold standard was effectively abandoned: domestic convertibility was illegal and the role of gold was very constrained in favor of the dollar.

“It was, at least in theory, simple enough in the old days,” wrote a wistful W. Randolph Burgess, head of the New York Federal Reserve, in 1938. “In the present strange new world, where the old gold portents have lost their former meaning, where is the radio beam which the central banker may follow? What is the equivalent of gold?”

The men of his era and of the late nineteenth century understood the meaning of such a question and, more importantly, why it is one that must be asked. But theirs was a different world, indeed — one without “QE,” “ZIRP,” or “Unknown Knowns” as fiscal policy. And there were no helicopters, either.

Marcia Christoff-Kurapovna is at work on the biography of a prominent European head of state and businessman. Her work has appeared in such publications as The Wall Street Journal, The Economist, and Foreign Affairs.

This article was originally published by the Ludwig von Mises Institute. Permission to reprint in whole or in part is hereby granted, provided full credit is given.