
The Rational Argumentator’s Sixteenth Anniversary Manifesto

The New Renaissance Hat
G. Stolyarov II
September 2, 2018
******************************

On August 31, 2018, The Rational Argumentator completed its sixteenth year of publication. TRA is older than Facebook, YouTube, Twitter, and Reddit; it has outlasted Yahoo! GeoCities, Associated Content, Helium, and most smaller online publications in philosophy, politics, and current events. Furthermore, the age of TRA now exceeds half of my lifetime to date. During this time, while the Internet and the external world shifted dramatically many times over, The Rational Argumentator strove to remain a bulwark of consistency – accepting growth in the form of improved infrastructure and accumulated content, but not the tumultuous sweeping away of the old to ostensibly make room for the new. We do not look favorably upon tumultuous upheaval; the future may look radically different from the past and present, but it should ideally be built in continuity with both, preserving whatever beneficial aspects can possibly be preserved.

The Rational Argumentator has experienced unprecedented visitation during its sixteenth year, receiving 1,501,473 total page views as compared to 1,087,149 total page views during its fifteenth year and 1,430,226 during its twelfth year, which had the highest visitation totals until now. Cumulative lifetime TRA visitation has reached 12,481,258 views. Even as TRA’s publication rate has slowed to 61 features during its sixteenth year – due to various time commitments, such as the work of the United States Transhumanist Party (which published 147 features on its website during the same timeframe) – the content of this magazine has drawn increasing interest. Readers, viewers, and listeners are gravitating toward both old and new features, as TRA generally aims to publish works of timeless relevance. The vaster our archive of content, the greater the variety of works and perspectives it spans, the more issues it engages with and reflects upon – the more robust and diverse our audience becomes, and the more insulated we become against the vicissitudes of the times and the fickle fluctuations of public sentiment and social-media fads.
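The visitation figures above imply a substantial year-over-year increase; a quick back-of-the-envelope calculation, using only the numbers quoted in this paragraph, makes the growth explicit:

```python
# Year-over-year growth implied by the page-view figures cited above.
views_year_16 = 1_501_473  # sixteenth year of publication
views_year_15 = 1_087_149  # fifteenth year of publication

growth = (views_year_16 - views_year_15) / views_year_15
print(f"Year-over-year growth: {growth:.1%}")  # roughly a 38% increase
```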

None of the above is intended to deny or minimize the challenges faced by those seeking to articulate rational, nuanced, and sophisticated ideas on the contemporary Internet. Highly concerning changes to the consumption and availability of information have occurred over the course of this decade, including the following trends.

  • While social media have been beneficial in terms of rendering personal communication at a distance more viable, the fragmentation of social media and the movement away from the broader “open Internet” have seemingly accelerated. Instead of directly navigating and returning to websites of interest, most people now access content almost exclusively through social-media feeds. Even popular and appealing content may become constrained within the walls of a particular social network or sub-group thereof, simply due to the “black-box” algorithms of that social network, which influence without explanation who sees what and when, and which may not reflect what those individuals would have preferred to see. The constantly changing nature of these algorithms renders it difficult for content creators to maintain steady connections with their audiences. If one adds to the mix the increasing and highly troubling tendency of social networks to actively police the content their members see, we may be returning to a situation where most people find their content inexplicably curated by “gatekeepers” who, in the name of objectivity and often with unconscious biases in play, end up advancing ulterior agendas not in the users’ interests.
  • While the democratization of access to knowledge and information on the Internet has undoubtedly had numerous beneficial effects, we are also all faced with the problem of “information overload” and the need to prioritize essential bits of information within an immense sea which we observe daily, hourly, and by the minute. The major drawback of this situation – in which everyone sees everything in a single feed, often curated by the aforementioned inexplicable algorithms – is the difficulty of even locating information that is more than a day old, as it typically becomes buried far down within the social-media feed. Potential counters to this tendency exist – namely, old-fashioned, static websites which publish content that does not adjust and that remains fixed to a particular URL, which can be bookmarked and visited time and again. But what proportion of the population has learned this technique of bookmarking and revisiting older content – instead of simply focusing on the social-media feed of the moment? It is imperative to resist the short-termist tendencies that the design of contemporary social media seems to encourage, as indulging these tendencies has had deleterious impacts on attention spans across an entire epoch of human culture.
  • Undeniably, much interesting and creative content has proliferated on the Internet, with opportunities for both deliberate and serendipitous learning, discovery, and intellectual enrichment. Unfortunately, the emergence of such content has coincided with deleterious shifts in cultural norms away from the expectation of concerted, sequential focus (the only way that human minds can actually achieve at a high level) and toward incessant multi-tasking and the expectation of instantaneous response to any external stimulus, human or automated. The practice of dedicating a block of time to read an article, watch a video, or listen to an audio recording – once a commonplace behavior – has come to be a luxury for those who can wrest segments of time and space away from the whirlwind of external stimuli and impositions within which humans (irrespective of material resources or social position) are increasingly expected to spin. It is fine to engage with others and venture into digital common spaces occasionally or even frequently, but in order for such interactions to be productive, one has to have meaningful content to offer; the creation of such content necessarily requires time away from the commons and a reclamation of the concept of private, solitary focus to read, contemplate, apply, and create.
  • An environment where immediate, recent, and short-term-oriented content tends to attract the most attention amplifies the impulsive, range-of-the-moment, reactive emotional tendencies of individuals, rather than the thoughtful, long-term-oriented, constructive, rational tendencies. Accordingly, political and cultural discourse becomes reduced to bitter one-liners that exacerbate polarization, intentional misunderstanding of others, and toxicity of rhetoric. The social networks where this has been most salient have been those that limit the number of characters per post and prioritize quantity of posts over quality and the instantaneity of a response over its thoughtfulness. The infrastructures whose design presupposes that everyone’s expressions are of equal value have produced a reduction of discourse to the lowest common denominator, which is, indeed, quite low. Even major news outlets, where some quality selection is still practiced by the editors, have found that user comments often degenerate into a toxic morass. This is not intended to deny the value of user comments and interaction in a properly civil and constructive context; nor is it intended to advocate any manner of censorship. Rather, this observation emphatically underscores the need for a return to long-form, static articles and longer written exchanges more generally as the desirable prevailing form of intellectual discourse. (More technologically intensive parallels to this long-form discourse would include long-form audio podcasts or video discussion panels where there is a single stream of conversation or narrative instead of a flurry of competing distractions.) Yes, this form of discourse takes more time and skill. Yes, this means that people have to form complex, coherent thoughts and express them in coherent, grammatically correct sentences. Yes, this means that fewer people will have the ability or inclination to participate in that form of discourse.
And yes, that may well be the point – because less of the toxicity will make its way completely through the structures which define long-form discourse – and because anyone who can competently learn the norms of long-form discourse, as they have existed throughout the centuries, will remain welcome to take part. Those who are not able or willing to participate can still benefit by spectating and, in the process, learning and developing their own skills.

The Internet was intended, by its early adopters and adherents of open Internet culture – including myself – to catalyze a new Age of Enlightenment through the free availability of information that would break down old prejudices and enable massively expanded awareness of reality and possibilities for improvement. Such a possibility remains, but humans thus far have fallen massively short of realizing it – because the will must be present to utilize constructively the abundance of available resources. Cultivating this will is no easy task; The Rational Argumentator has been pursuing it for sixteen years and will continue to do so. The effects are often subtle, indirect, long-term – more akin to the gradual drift of continents than the upward ascent of a rocket. And yet progress in technology, science, and medicine continues to occur. New art continues to be created; new treatises continue to be written. Some people do learn, and some people’s thinking does improve. There is no alternative except to continue to act in pursuit of a brighter future, and in the hope that others will pursue it as well – that, cumulatively, our efforts will be sufficient to avert the direst crises, make life incrementally safer, healthier, longer, and more comfortable, and, as a civilization, persist beyond the recent troubled times. The Rational Argumentator is a bulwark against the chaos – hopefully one among many – and hopefully many are at work constructing more bulwarks. Within the bulwarks, great creations may have room to develop and flourish – waiting for the right time, once the chaos subsides or is pacified by Reason, to emerge and beautify the world. In the meantime, enjoy all that can be found within our small bulwark, and visit it frequently to help it expand.

Gennady Stolyarov II,
Editor-in-Chief, The Rational Argumentator

This essay may be freely reproduced using the Creative Commons Attribution Share-Alike International 4.0 License, which requires that credit be given to the author, G. Stolyarov II. Find out about Mr. Stolyarov here.

How To Survive a World of Instant Feedback – Article by Jeffrey A. Tucker

Jeffrey A. Tucker
******************************

I first started writing before the Internet existed. We all wrote for an audience we mostly had to imagine in our minds.

The only way to give an author feedback was to write a letter, put it in an envelope with an approved stamp, and give it to a government employee who would slog across the land and then drop it at the writer’s physical locale a week after he or she wrote the initial piece. People did it but not that often.

Yes, I know there are people reading this who find this hilarious and embarrassing. It seems as long ago as the Wars of the Roses. Actually, it wasn’t that long ago. But the distance between then and now seems like eons. That is how much, and how quickly, we’ve advanced.

The dark ages: everything before 1995.

Because no one really knew what readers were thinking – actually, hardly anyone knew anything about anything, in retrospect – you had to assume some rule of thumb about any feedback you were lucky enough to get. I assumed that one letter equalled the views of one thousand readers. Two letters saying the same thing represented five thousand readers. Three letters with the same opinion suggested near unanimity: this is the view of every reader.

Now We Know Everything

Times have dramatically changed. I could right now post a thought and get hundreds of reactions within a few minutes. There’s no shortage of input, that’s for sure. There’s email of course, but also comment boxes, forums, social media posts, and lightning-fast Twitter interactions.

Twitter is often called a cesspool of toxicity. This is mostly untrue. It’s just that the toxic parts stand out in our minds because they have a bigger impact on our psyches.

This is how it is with all feedback. I once knew a world-famous soprano who received her fans following concerts. One hundred fifty people would tell her she was fabulous and amazing. One person would say: “You were fine but it wasn’t your best night.”

Guess which comment she remembered?

So too on Twitter. Not all commentary is thoughtful. In fact, no matter what I post, unless it is completely innocuous, I’m very likely to face a flurry of outraged opinions, some of which are laced with profanity and some of which trend toward the deeply disturbing. These are the reactions we tend to remember. They rattle, shock, and alarm us. They give the impression that humanity is a teeming mass of angry, unthoughtful, and even cruel people.

It’s mostly an illusion. But it takes some experience to figure out why.

Everyone Hates You

We live in a highly partisan world generally divided between right and left, and each side is ready to pounce on anyone it perceives to be an enemy.

One day this week, I was simultaneously hammered by the left and right, and it made an interesting study in contrast.

The Twitter Left

I had written a defense of “child labor,” which is to say I wrote against laws that forbid tweens from getting a paying job as a supplement to education they are otherwise forced by government to endure. This would be a wonderful opportunity for them, and give them an awesome preparation for life. The law forbade this back in the 1930s. Today, kids are basically banned from working or face such hurdles as to make it not worth it. They can’t really be fully employed until the age of 18.

To me all of this is rather obvious, and I don’t get why I seem to be one of the only people on this beat. Regardless, the article took off and received 100,000-plus views. Some of the readers were dedicated leftists, who regard the legal abolition of “child labor” to be one of the great signs of progress in the world.

The flurry of loathing began. I was called out for being a bad person, a cruel person, a man with a heart of stone, a complete jerk who lacks a shred of human decency. In each case, I would reply asking my accuser to explain why he or she is saying this. They would respond with shock: “for God’s sake, man, you are defending child labor!”

Again, that only raises the question. One person said that I dreamt of throwing kids back in the salt mines. I don’t even know what that means. Is there a salt mine around here that is looking for 12-year-old inexperienced kids to exploit? Actually, I’m thinking more of kids working at Chick-fil-A or Walmart or a lawn company.

Anyway, this seems to be a left-wing penchant. Anyone who disagrees with their policies is a bad person. End of story.

The Twitter Right

Then you have the far-right, the sector of Internet life that has most mastered the art of trolling. Users in this camp don’t tend to use their real names. They create dozens of sock-puppet accounts. They send blast after blast designed to make the recipient feel as if he or she is being bombarded.

The same day my child-labor piece came out, I tweeted that I had doubts about the theory that Seth Rich was shot for leaking DNC emails. I raised the problem that there is a lack of evidence to support the theory. If you know about this conspiracy theory, you know that hundreds of thousands of people believe, thanks mostly to Sean Hannity, that there is a huge coverup going on, and that someone in the Hillary Clinton camp is guilty of outright murder.

I have no special intelligence on the topic. I was only asking what I thought were intelligent questions.

Then came the bombardment. I was accused of being a toady of the Democrats. A dupe. A snowflake. An apologist for Clinton. A cuck. A member of the mainstream media. In the pay of the deep state. And so on. Then the memes started. Here is where things get wicked. They use your face and plant it in cartoons, being thrown out of helicopters, being burned alive in gas chambers, and so on.

What you discover from Twitter is that when you are trolled by the right, you are only one degree separated from real Nazis. Of course they say that they are not really Nazis. They are only ironic Nazis, people using free speech to annoy the left with extremist rhetoric that is not authentic but only play acting.

As if ideas don’t matter. Of course they matter! No one wants to wake up in the morning to 150 notifications from Nazis. That will indeed take your breath away and get your heart pumping. It is supposed to. That is precisely what it is intended to do. If you then go public and write a bleating post about the rise of Nazism in America, they all cheer because that is what they hope for.

How To Deal With It

Dealing with instant feedback from anyone in the world is something new. It is no longer the case that three interactions with the same opinion represent multitudes. It could mean only three people. Even 300 interactions means only 300 interactions. There are 328 million people on Twitter.

Keep that in mind.
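To make that proportion concrete, here is a small illustrative calculation; the 328 million figure is the one cited above, while the reply count of 300 is a hypothetical flurry:

```python
# How large a share of Twitter's user base do 300 hostile replies represent?
twitter_users = 328_000_000   # total users, per the figure cited above
hostile_replies = 300         # a hypothetical bombardment of angry reactions

share = hostile_replies / twitter_users
print(f"Share of all users: {share:.7%}")
```

Even a bombardment that feels overwhelming amounts to less than a millionth of the user base.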

Other strategies I use include retweeting insults (this very much confuses your tormentors), calm and rational argumentation, and of course blocking. I feel like I block constantly. It’s not actually true: last I checked, I’ve blocked 140 people and have 26,000 followers. That’s not a huge army of trolls. That’s really a minor annoyance, even if it feels otherwise.

Most of all, I would suggest feeling nothing but gratitude for the spread of information technology. People complain constantly about fake news, internet trolls, hate armies, and so on. But you know what’s worse? Living in the dark ages. No one wants to go back.

Jeffrey Tucker is Director of Content for the Foundation for Economic Education. He is also Chief Liberty Officer and founder of Liberty.me, Distinguished Honorary Member of Mises Brazil, research fellow at the Acton Institute, policy adviser of the Heartland Institute, founder of the CryptoCurrency Conference, member of the editorial board of the Molinari Review, an advisor to the blockchain application builder Factom, and author of five books. He has written 150 introductions to books and many thousands of articles appearing in the scholarly and popular press.

This article was published by The Foundation for Economic Education and may be freely distributed, subject to a Creative Commons Attribution 4.0 International License, which requires that credit be given to the author. Read the original article.

Are We Fighting Terrorism, Or Creating More Terrorism? – Article by Ron Paul

Ron Paul
******************************

When we think about terrorism, we most often think about the horrors of a Manchester-like attack, where a radicalized suicide bomber went into a concert hall and killed dozens of innocent civilians. It was an inexcusable act of savagery, and it certainly did terrorize the population.

What is less considered are attacks that leave far more civilians dead, happen nearly daily instead of rarely, and produce a constant feeling of terror and dread. The victims are the civilians on the receiving end of US and allied bombs in places like Syria, Yemen, Afghanistan, Somalia, and elsewhere.

Last week alone, US and “coalition” attacks on Syria left more than 200 civilians dead and many hundreds more injured. In fact, even though US intervention in Syria was supposed to protect the population from government attacks, US-led air strikes have killed more civilians over the past month than air strikes of the Assad government. That is like a doctor killing his patient to save him.

Do we really believe we are fighting terrorism by terrorizing innocent civilians overseas? How long until we accept that “collateral damage” is just another word for “murder”?

The one so-called success of the recent G7 summit in Sicily was a general agreement to join together to “fight terrorism.” Have we not been in a “war on terrorism” for the past 16 years? What this really means is more surveillance of innocent civilians, a crackdown on free speech and the Internet, and many more bombs dropped overseas. Will doing more of what we have been doing do the trick? Hardly! After 16 years fighting terrorism, it is even worse than before we started. This can hardly be considered success.

They claim that more government surveillance will keep us safe. But the UK is already the most intrusive surveillance state in the western world. The Manchester bomber was surely on the radar screen. According to press reports, he was known to the British intelligence services, he had traveled and possibly trained in bomb-making in Libya and Syria, his family members warned the authorities that he was dangerous, and he even flew terrorist flags over his house. What more did he need to do to signal that he may be a problem? Yet somehow even in Orwellian UK, the authorities missed all the clues.

But it is even worse than that. The British government actually granted permission for its citizens of Libyan background to travel to Libya and fight alongside al-Qaeda to overthrow Gaddafi. After months of battle and indoctrination, it then welcomed these radicalized citizens back to the UK. And we are supposed to be surprised and shocked that they attack?

The real problem is that both Washington and London are more interested in regime change overseas than any blowback that might come to the rest of us back home. They just do not care about the price we pay for their foreign-policy actions. No grand announcement of new resolve to “fight terrorism” can be successful unless we understand what really causes terrorism. They do not hate us because we are rich and free. They hate us because we are over there, bombing them.

Ron Paul, MD, is a former three-time Republican candidate for U. S. President and Congressman from Texas.

This article is reprinted with permission from the Ron Paul Institute for Peace and Prosperity.

U.S. Transhumanist Party Support for H.R. 1868, the Restoring American Privacy Act of 2017 – Post by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
******************************

The United States Transhumanist Party and Nevada Transhumanist Party support H.R. 1868, the Restoring American Privacy Act of 2017, proposed by Rep. Jacky Rosen of Henderson, Nevada.

This bill, if enacted into law, would undo the power recently granted by S.J. Res. 34 for regional-monopoly Internet Service Providers (ISPs) to sell individuals’ private data – including browsing histories – without those individuals’ consent. For more details, read Caleb Chen’s article on Privacy News Online, “Congresswoman Rosen introduces Restoring American Privacy Act of 2017 to reverse S.J. Res. 34”.

Section I of the U.S. Transhumanist Party Platform states, “The United States Transhumanist Party strongly supports individual privacy and liberty over how to apply technology to one’s personal life. The United States Transhumanist Party holds that each individual should remain completely sovereign in the choice to disclose or not disclose personal activities, preferences, and beliefs within the public sphere. As such, the United States Transhumanist Party opposes all forms of mass surveillance and any intrusion by governmental or private institutions upon non-coercive activities that an individual has chosen to retain within his, her, or its private sphere. However, the United States Transhumanist Party also recognizes that no individuals should be protected from peaceful criticism of any matters that those individuals have chosen to disclose within the sphere of public knowledge and discourse.”

Neither governmental nor private institutions – especially private institutions with coercive monopoly powers granted to them by laws barring or limiting competition – should be permitted to deprive individuals of the choice over whether or not to disclose their personal information.

Individuals’ ownership over their own data and sovereignty over whether or not to disclose any browsing history or other history of online visitation to external entities are essential components of privacy, and we applaud Representative Rosen for her efforts to restore these concepts within United States federal law.

Become a member of the U.S. Transhumanist Party for free by filling out the membership application form here.

The IRS Believes All Bitcoin Users are Tax Cheats – Article by Jim Harper

The New Renaissance Hat
Jim Harper
******************************

The Internal Revenue Service has filed a “John Doe” summons seeking to require U.S. Bitcoin exchange Coinbase to turn over records about every transaction of every user from 2013 to 2015. That demand is shocking in sweep, and it includes: “complete user profile, history of changes to user profile from account inception, complete user preferences, complete user security settings and history (including confirmed devices and account activity), complete user payment methods, and any other information related to the funding sources for the account/wallet/vault, regardless of date.” And every single transaction:

All records of account/wallet/vault activity including transaction logs or other records identifying the date, amount, and type of transaction (purchase/sale/exchange), the post transaction balance, the names or other identifiers of counterparties to the transaction; requests or instructions to send or receive bitcoin; and, where counterparties transact through their own Coinbase accounts/wallets/vaults, all available information identifying the users of such accounts and their contact information.

The demand is not limited to owners of large amounts of Bitcoin or to those who have transacted in large amounts. Everything about everyone.

Equally shocking is the weak foundation for making this demand. In a declaration submitted to the court, an IRS agent recounts having learned of tax evasion on the part of one Bitcoin user and two companies. On this basis, he and the IRS claim “a reasonable basis for believing” that all U.S. Coinbase users “may fail or may have failed to comply” with the internal revenue laws.

If that evidence is enough to create a reasonable basis to believe that all Bitcoin users evade taxes, the IRS is entitled to access the records of everyone who uses paper money.

Anecdotes and online braggadocio about tax avoidance are not a reasonable basis to believe that all Coinbase users are tax cheats whose financial lives should be opened to IRS investigators and the hackers looking over their shoulders. There must be some specific information about particular users; otherwise, the IRS is seeking a general warrant, which the Fourth Amendment denies it the power to obtain.

Speaking of the Fourth Amendment, that rock-bottom “reasonable basis” standard is probably insufficient. Americans should and probably do have Fourth Amendment rights in information they entrust to financial services providers required by contract to keep it confidential. Observers of Fourth Amendment law know full well that the “third-party doctrine,” which cancels Fourth Amendment interests in shared information, is in retreat.

The IRS’s effort to strip away the privacy of all Coinbase users is broader than the government’s effort in recent cases dealing with cell site location information. In the CSLI cases, the government has sought data about particular suspects, using a standard below the probable cause standard required by the Fourth Amendment (“specific and articulable facts showing that there are reasonable grounds to believe”).

In United States v. Benbow, we argued to the D.C. Circuit that people retain a property right in information they share with service providers under contractual privacy obligations. This information is a “paper or effect” for purposes of the Fourth Amendment. Accordingly, a probable cause standard should apply to accessing that data.

Again, the government in the CSLI cases sought information about the cell phone use of particular suspects, and that is controversial enough given the low standard of the Stored Communications Act. Here, the IRS is seeking data about every user of Bitcoin, using a standard that’s even lower.

Coinbase’s privacy policy only permits it to share user information with law enforcement when it is “compelled to do so.” That implies putting up a reasonable fight for the interests of its users. Given the low standard and the vastly overbroad demand, Coinbase seems obligated to put up that fight.

Jim Harper is a senior fellow at the Cato Institute, working to adapt law and policy to the information age in areas such as privacy, cybersecurity, telecommunications, intellectual property, counterterrorism, government transparency, and digital currency. A former counsel to committees in both the U.S. House and the U.S. Senate, he went on to represent companies such as PayPal, ICO-Teledesic, DigitalGlobe, and Verisign, and in 2014 he served as Global Policy Counsel for the Bitcoin Foundation.

Harper holds a JD from the University of California–Hastings College of Law.

This work by Cato Institute is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Read the original article.

Decentralization: Why Dumb Networks Are Better – Article by Andreas Antonopoulos

The New Renaissance Hat
Andreas Antonopoulos
March 8, 2015
******************************

“Every device employed to bolster individual freedom must have as its chief purpose the impairment of the absoluteness of power.” — Eric Hoffer

In computer and communications networks, decentralization leads to faster innovation, greater openness, and lower cost. Decentralization creates the conditions for competition and diversity in the services the network provides.

But how can you tell if a network is decentralized, and what makes it more likely to be decentralized? Network “intelligence” is the characteristic that differentiates centralized from decentralized networks — but in a way that is surprising and counterintuitive.

Some networks are “smart.” They offer sophisticated services that can be delivered to very simple end-user devices on the “edge” of the network. Other networks are “dumb” — they offer only a very basic service and require that the end-user devices are intelligent. What’s smart about dumb networks is that they push innovation to the edge, giving end-users control over the pace and direction of innovation. Simplicity at the center allows for complexity at the edge, which fosters the vast decentralization of services.

Surprisingly, then, “dumb” networks are the smart choice for innovation and freedom.

The telephone network used to be a smart network supporting dumb devices (telephones). All the intelligence in the telephone network and all the services were contained in the phone company’s switching buildings. The telephone on the consumer’s kitchen table was little more than a speaker and a microphone. Even the most advanced touch-tone telephones were still pretty simple devices, depending entirely on the network services they could “request” through beeping the right tones.

In a smart network like that, there is no room for innovation at the edge. Sure, you can make a phone look like a cheeseburger or a banana, but you can’t change the services it offers. The services depend entirely on the central switches owned by the phone company. Centralized innovation means slow innovation. It also means innovation directed by the goals of a single company. As a result, anything that doesn’t seem to fit the vision of the company that owns the network is rejected or even actively fought.

In fact, until 1968, AT&T restricted the devices allowed on the network to a handful of approved devices. In 1968, in a landmark decision, the FCC ruled in favor of the Carterfone, an acoustic coupler device for connecting two-way radios to telephones, opening the door for any consumer device that didn’t “cause harm to the system.”

That ruling paved the way for the answering machine, the fax machine, and the modem. But even with the ability to connect smarter devices to the edge, it wasn’t until the modem that innovation really accelerated. The modem represented a complete inversion of the architecture: all the intelligence was moved to the edge, and the phone network was used only as an underlying “dumb” network to carry the data.

Did the telecommunications companies welcome this development? Of course not! They fought it for nearly a decade, using regulation, lobbying, and legal threats against the new competition. In some countries, modem calls across international lines were automatically disconnected to prevent competition in the lucrative long-distance market. In the end, the Internet won. Now, almost the entire phone network runs as an app on top of the Internet.

The Internet is a dumb network, which is its defining and most valuable feature. The Internet’s protocol (transmission control protocol/Internet protocol, or TCP/IP) doesn’t offer “services.” It doesn’t make decisions about content. It doesn’t distinguish between photos and text, video and audio. It doesn’t have a list of approved applications. It doesn’t even distinguish between client and server, user and host, or individual versus corporation. Every IP address is an equal peer.

TCP/IP acts as an efficient pipeline, moving data from one point to another. Over time, it has had some minor adjustments to offer some differentiated “quality of service” capabilities, but other than that, it remains, for the most part, a dumb data pipeline. Almost all the intelligence is on the edge — all the services, all the applications are created on the edge-devices. Creating a new application does not involve changing the network. The Web, voice, video, and social media were all created as applications on the edge without any need to modify the Internet protocol.
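As a loose illustration of this point (not from the original article), consider how a brand-new "service" can be defined entirely by two edge programs, while TCP merely carries opaque bytes between them. The uppercase-echo "protocol" and message below are arbitrary assumptions for the sketch:

```python
# A minimal "edge application" over TCP: the network only moves bytes;
# the application protocol (here, uppercase-echo) exists entirely at
# the two endpoints. Nothing in the transport layer knows or cares
# what this service means.
import socket
import threading

def exchange(message=b"hello, dumb network"):
    """Send bytes through a TCP connection and return the server's reply."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data.upper())  # the "service" is defined here, at the edge
        srv.close()

    threading.Thread(target=serve, daemon=True).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))
        cli.sendall(message)
        return cli.recv(1024)

if __name__ == "__main__":
    print(exchange())  # b'HELLO, DUMB NETWORK'
```

No change to the network, and no permission from it, was needed for this "application" to exist; the same pipe carries web pages, video, or this toy protocol with equal indifference.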

So the dumb network becomes a platform for independent innovation, without permission, at the edge. The result is an incredible range of innovations, carried out at an even more incredible pace. People interested in even the tiniest of niche applications can create them on the edge. Applications with only two participants need just two devices to support them, and they can run on the Internet. Contrast that to the telephone network, where a new “service,” like caller ID, had to be built and deployed on every company switch, incurring maintenance cost for every subscriber. So only the most popular, profitable, and widely used services got deployed.

The financial services industry is built on top of many highly specialized and service-specific networks. Most of these are layered atop the Internet, but they are architected as closed, centralized, and “smart” networks with limited intelligence on the edge.

Take, for example, the Society for Worldwide Interbank Financial Telecommunication (SWIFT), the international wire transfer network. The consortium behind SWIFT has built a closed network of member banks that offers specific services: secure messages, mostly payment orders. Only banks can be members, and the network services are highly centralized.

The SWIFT network is just one of dozens of single-purpose, tightly controlled, and closed networks offered to financial services companies such as banks, brokerage firms, and exchanges. All these networks mediate the services by interposing the service provider between the “users,” and they allow minimal innovation or differentiation at the edge — that is, they are smart networks serving mostly dumb devices.

Bitcoin is the Internet of money. It offers a basic dumb network that connects peers from anywhere in the world. The bitcoin network itself does not define any financial services or applications. It doesn’t require membership registration or identification. It doesn’t control the types of devices or applications that can live on its edge. Bitcoin offers one service: securely time-stamped scripted transactions. Everything else is built on the edge-devices as an application. Bitcoin allows any application to be developed independently, without permission, on the edge of the network. A developer can create a new application using the transactional service as a platform and deploy it on any device. Even niche applications with few users — applications never envisioned by the bitcoin protocol creator — can be built and deployed.
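To make "securely time-stamped scripted transactions" a little more concrete, here is a toy hash-chain sketch. This is a drastic simplification that omits proof-of-work, signatures, scripts, and the peer-to-peer network entirely; the record fields are invented for illustration. It shows only the core primitive an edge application builds on: an ordered, tamper-evident log.

```python
# Toy illustration of time-stamped, chained records (NOT Bitcoin's
# actual block structure). Each record commits to the previous one's
# hash, so altering any earlier record invalidates everything after it.
import hashlib
import json
import time

def add_record(chain, payload):
    """Append a payload to the chain, linking it to the prior record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "time": time.time(), "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; tampering with any earlier record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("payload", "time", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True
```

An edge application can layer any meaning it likes onto the payloads; the "network" in this analogy offers only the ordering and tamper-evidence, just as the article describes.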

Almost any network architecture can be inverted. You can build a closed network on top of an open network or vice versa, although it is easier to centralize than to decentralize. The modem inverted the phone network, giving us the Internet. The banks have built closed network systems on top of the decentralized Internet. Now bitcoin provides an open network platform for financial services on top of the open and decentralized Internet. The financial services built on top of bitcoin are themselves open because they are not “services” delivered by the network; they are “apps” running on top of the network. This arrangement opens a market for applications, putting the end user in a position of power to choose the right application without restrictions.

What happens when an industry transitions from using one or more “smart” and centralized networks to using a common, decentralized, open, and dumb network? A tsunami of innovation that was pent up for decades is suddenly released. All the applications that could never get permission in the closed network can now be developed and deployed without permission. At first, this change involves reinventing the previously centralized services with new and open decentralized alternatives. We saw that with the Internet, as traditional telecommunications services were reinvented with email, instant messaging, and video calls.

This first wave is also characterized by disintermediation — the removal of entire layers of intermediaries who are no longer necessary. With the Internet, this meant replacing brokers, classified ads publishers, real estate agents, car salespeople, and many others with search engines and online direct markets. In the financial industry, bitcoin will create a similar wave of disintermediation by making clearinghouses, exchanges, and wire transfer services obsolete. The big difference is that some of these disintermediated layers are multibillion dollar industries that are no longer needed.

Beyond the first wave of innovation, which simply replaces existing services, is another wave that begins to build the applications that were impossible with the previous centralized network. The second wave doesn’t just create applications that compare to existing services; it spawns new industries on the basis of applications that were previously too expensive or too difficult to scale. By eliminating friction in payments, bitcoin doesn’t just make better payments; it introduces market mechanisms and price discovery to economic activities that were too small or inefficient under the previous cost structure.

We used to think “smart” networks would deliver the most value, but making the network “dumb” enabled a massive wave of innovation. Intelligence at the edge brings choice, freedom, and experimentation without permission. In networks, “dumb” is better.

Andreas M. Antonopoulos is a technologist and serial entrepreneur who advises companies on the use of technology and decentralized digital currencies such as bitcoin.

This article was originally published by The Foundation for Economic Education.

Math Education Should Be Set Free – Article by Bradley Doucet

The New Renaissance Hat
Bradley Doucet
February 12, 2015
******************************
At different times in my life, I have earned my living tutoring high school math, helping struggling students struggle a little less with quadratic equations and trigonometric functions. I always excelled at math when I was in high school, and my temperament is well-suited to being patient with kids who are not understanding, and to figuring out why they’re not understanding. The experience of assisting a couple of hundred different students over the years has convinced me that just about anyone can learn to understand high school math. Some people simply need more time than others to become proficient with numbers and graphs and such.
Given my background, I read with interest The Globe and Mail’s write-up in January 2014 on what they are calling the Math Wars, “a battle that’s been brewing for years but heated up last month when this country dropped out of the top 10 in international math education standings.” Specifically, since the year 2000, Canada has fallen from 6th to 13th in the OECD’s Programme for International Student Assessment (PISA). Robert Craigen, a University of Manitoba mathematics professor, points out that this slippage coincides with the move away from teaching basic math skills and the adoption of discovery learning. In much of Canada today, this latest fad has children learning (or failing to learn) math by “investigating ideas through problem-solving, pattern discovery and open-ended exploration.”

Interestingly, when the Canadian provinces are included in the PISA rankings, Quebec is first among them, places 8th overall, and has lost practically no ground over the last dozen years. Why is Quebec suddenly ahead of the pack? Another Globe and Mail article from last month says that little work has been done on this question, but that “researchers have started focusing on Quebec’s intensive teacher training and curriculum, which balances traditional math drills with problem-solving approaches.” Basic math skills and problem solving sounds like a winning combination to me—and I bet the extra teacher training doesn’t hurt either.

Personally, I have long thought that math students should be allowed to progress at different rates. Currently, the brightest students shine out by scoring 90s and 100s while weaker students flounder with 60s and 70s and are forced to move on to more complex topics without having mastered more basic ones, almost ensuring their continued difficulties. With student-paced learning, the brightest students could still shine out by progressing more quickly, but weaker students would be given the time they need to master each topic before tackling harder problems. Everyone would get 90s and 100s; some would just get them sooner. Teaching would have to change, of course, in such a system. Maybe students would end up watching pre-recorded lessons, a la Khan Academy, and teachers could become more like flexible aides in the classroom, in addition to monitoring individual students to make sure they aren’t slacking off.

The Globe and Mail ended its editorial on Canada’s math woes last Thursday with a call to action: “If our students’ success in math really matters—and it does—it’s time to a have national policy discussion on how to move forward. Everything should be on the table, including curriculum reform. Let’s think big.” I can’t think of a worse idea. Even if you put me in charge of developing this national policy, it would still be a bad idea. After all, who’s to say if I’m correct in supposing that learning at your own pace is the way to go, that it would help everyone succeed and take away some of the anxiety many feel about math? Maybe it would be good for some, and less good for others. Maybe some people need the thrill of competing for top marks, while others would thrive in a less overtly competitive environment. Maybe people are different.

It’s bad enough that governments fix policies for entire provinces; the last thing we need is for everyone in the entire country to be doing the same thing. To the extent that there is a better way (or that there are better ways) to teach math, ways that we may not have even tried yet, the best means of discovering them is to allow different schools to teach math differently, to vary curriculum and teaching style and class size and whatever else they think might help. Let them compete for students, and let the best approaches win, and the worst approaches fall by the wayside, instead of having everyone follow the latest fad and doing irreparable damage to an entire cohort of kids.

It’s very hard to imagine this happening, though, in a system that is financed through taxation. Even though it’s ultimately the same people paying, whether directly as consumers or indirectly as taxpayers, people get into the mental habit of thinking that the government is paying, as if the government had a source of income other than the incomes of its people. And if the government is paying, then the government has to make sure it’s getting its money’s worth, and it’s only natural then that the government (i.e., politicians and bureaucrats) should set the curriculum and educational approach and make sure everyone is progressing at the same pace, in flagrant disregard of human diversity. It seems that we have a choice between “free” education and setting education free. Politicians and bureaucrats won’t give up control without a fight, though, which is a shame in the short term. But it may not matter in the longer term, as private initiatives like the Khan Academy make government schooling increasingly irrelevant.

I love math, and I furthermore believe that it is important for people to learn math. Mastery of math does indeed matter, which is precisely why we should think small and avoid the siren song of a “national policy discussion on how to move forward” on the educational front. Instead, we should let a thousand flowers bloom, and work with, not against, the natural diversity of humankind.

Bradley Doucet is Le Québécois Libre‘s English Editor and the author of the blog Spark This: Musings on Reason, Liberty, and Joy. A writer living in Montreal, he has studied philosophy and economics, and is currently completing a novel on the pursuit of happiness.
The Internet Memory Hole – Article by Wendy McElroy

The New Renaissance Hat
Wendy McElroy
November 24, 2014
******************************

Imagine you are considering a candidate as a caregiver for your child. Or maybe you are vetting an applicant for a sensitive position in your company. Perhaps you’re researching a public figure for class or endorsing him in some manner. Whatever the situation, you open your browser and assess the linked information that pops up from a search. Nothing criminal or otherwise objectionable is present, so you proceed with confidence. But what if the information required for you to make a reasoned assessment had been removed by the individual himself?

Under “the right to be forgotten,” a new “human right” established in the European Union in 2012, people can legally require a search engine to delete links to their names, even if information at the linked source is true and involves a public matter such as an arrest. The Google form for requesting removal asks the legally relevant question of why the link is “irrelevant, outdated, or otherwise objectionable.” Then it is up to the search engine to determine whether to delete the link.

The law’s purpose is to prevent people from being stigmatized for life. The effect, however, is to limit freedom of the press, freedom of speech, and access to information. Each person becomes a potential censor who can rewrite history for personal advantage.

It couldn’t happen here

The process of creating such a law in the United States is already underway. American law is increasingly driven by public opinion and polls. The IT security company Software Advice recently conducted a survey that found that “sixty-one percent of Americans believe some version of the right to be forgotten is necessary,” and “thirty-nine percent want a European-style blanket right to be forgotten, without restrictions.” And politicians love to give voters what they want.

In January 2015, California will enforce the Privacy Rights for California Minors in the Digital World law. This is the first state version of a “right to be forgotten” law. It requires “the operator of an Internet Web site, online service, online application, or mobile application to permit a minor, who is a registered user … to remove, or to request and obtain removal of, content or information posted … by the minor.” (There are some exceptions.)

Meanwhile, the consumer-rights group Consumer Watchdog has floated the idea that Google should voluntarily provide Americans with the right to be forgotten. On September 30, 2014, Forbes stated, “The fight for the right to be forgotten is certainly coming to the U.S., and sooner than you may think.” For one thing, there is a continuing hue and cry about embarrassing photos of minors and celebrities being circulated.

Who and what deserves to be forgotten?

What form would the laws likely take? In the Stanford Law Review (February 13, 2012), legal commentator Jeffrey Rosen presented three categories of information that would be vulnerable if the EU rules became a model. First, material posted could be “unlinked” at the poster’s request. Second, material copied by another site could “almost certainly” be unlinked at the original poster’s request unless its retention was deemed “necessary” to “the right of freedom of expression.” Rosen explained, “Essentially, this puts the burden on” the publisher to prove that the link “is a legitimate journalistic (or literary or artistic) exercise.” Third, the commentary of one individual about another, whether truthful or not, could be vulnerable. Rosen observed that the EU includes “takedown requests for truthful information posted by others.… I can demand takedown and the burden, once again, is on the third party to prove that it falls within the exception for journalistic, artistic, or literary exception.”

Search engines have an incentive to honor requests rather than to absorb the legal cost of fighting them. Rosen said, “The right to be forgotten could make Facebook and Google, for example, liable for up to two percent of their global income if they fail to remove photos that people post about themselves and later regret, even if the photos have been widely distributed already.” An October 12, 2014, article in the UK Daily Mail indicated the impact of compliance on the free flow of public information. The headline: “Google deletes 18,000 UK links under ‘right to be forgotten’ laws in just a month: 60% of Europe-wide requests come from fraudsters, criminals and sex offenders.”

American backlash

America protects the freedoms of speech and the press more vigorously than Europe does. Even California’s limited version of a “right to be forgotten” bill has elicited sharp criticism from civil libertarians and tech-freedom advocates. The IT site TechCrunch expressed the main practical objection: “The web is chaotic, viral, and interconnected. Either the law is completely toothless, or it sets in motion a very scary anti-information snowball.” TechCrunch also expressed the main political objection: The bill “appears to create a head-on collision between privacy law and the First Amendment.”

Conflict between untrue information and free speech need not occur. Peter Fleischer, Google’s global privacy counsel, explained, “Traditional law has mechanisms, like defamation and libel law, to allow a person to seek redress against someone who publishes untrue information about him.… The legal standards are long-standing and fairly clear.” Defamation and libel are controversial issues within the libertarian community, but the point here is that defense against untrue information already exists.

What of true information? Truth is a defense against being charged with defamation or libel. America tends to value freedom of expression above privacy rights. It is no coincidence that the First Amendment is first among the rights protected by the Constitution. And any “right” to delete the truth from the public sphere runs counter to the American tradition of an open public square where information is debated and weighed.

Moreover, even true information can have powerful privacy protection. For example, the Fourth Amendment prohibits the use of data that is collected via unwarranted search and seizure. The Fourteenth Amendment is deemed by the Supreme Court to offer a general protection to family information. And then there are the “protections” of patents, trade secrets, copyrighted literature, and a wide range of products that originate in the mind. Intellectual property is controversial, too. But again, the point here is that defenses already exist.

Reputation capital

Reputation capital consists of the good or bad opinions that a community holds of an individual over time. It is not always accurate, but it is what people think. The opinion is often based on past behaviors, which are sometimes viewed as an indicator of future behavior. In business endeavors, reputation capital is so valuable that aspiring employees will work for free as interns in order to accrue experience and recommendations. Businesses will take a loss to replace an item or to otherwise credit a customer in order to establish a name for fairness. Reputation is thus a path to being hired and to attracting more business. It is a nonfinancial reward for establishing the reliability and good character upon which financial remuneration often rests.

Conversely, if an employee’s bad acts are publicized, then a red flag goes up for future employers who might consider his application. If a company defrauds customers, community gossip could drive it out of business. In the case of negative reputation capital, the person or business who considers dealing with the “reputation deficient” individual is the one who benefits by realizing a risk is involved. Services, such as eBay, often build this benefit into their structure by having buyers or sellers rate individuals. By one estimate, a 1 percent negative rating can reduce the price of an eBay good by 4 percent. This system establishes a strong incentive to build positive reputation capital.

Reputation capital is particularly important because it is one of the key answers to the question, “Without government interference, how do you ensure the quality of goods and services?” In a highly competitive marketplace, reputation becomes a path to success or to failure.

Right-to-be-forgotten laws offer a second chance to an individual who has made a mistake. This is a humane option that many people may choose to extend, especially if the individual will work for less money or offer some other advantage in order to win back his reputation capital. But the association should be a choice. The humane nature of a second chance should not overwhelm the need of others for public information to assess the risks involved in dealing with someone. Indeed, this risk assessment provides the very basis of the burgeoning sharing economy.

History and culture are memory

In “The Right to Be Forgotten: An Insult to Latin American History,” Eduardo Bertoni offers a potent argument. He writes that the law’s “name itself” is “an affront to Latin America; rather than promoting this type of erasure, we have spent the past few decades in search of the truth regarding what occurred during the dark years of the military dictatorships.” History is little more than preserved memory. Arguably, culture itself lives or dies depending on what is remembered and shared.

And yet, because the right to be forgotten has the politically seductive ring of fairness, it is becoming a popular view. Fleischer called privacy “the new black in censorship fashion.” And it may be increasingly on display in America.

Wendy McElroy (wendy@wendymcelroy.com) is an author, editor of ifeminists.com, and Research Fellow at The Independent Institute (independent.org).

This article was originally published by The Foundation for Economic Education.

Internet Gambling Ban: A Winner for Sheldon Adelson, A Losing Bet for the Rest of Us – Article by Ron Paul

The New Renaissance Hat
Ron Paul
November 16, 2014
******************************
Most Americans, regardless of ideology, oppose “crony capitalism” or “cronyism.” Cronyism is where politicians write laws aimed at helping their favored business beneficiaries. Despite public opposition to cronyism, politicians still seek to use the legislative process to help special interests.

For example, Congress may soon vote on legislation outlawing Internet gambling. It is an open secret, at least inside the Beltway, that this legislation is being considered as a favor to billionaire casino owner Sheldon Adelson. Mr. Adelson, who is perhaps best known for using his enormous wealth to advance a pro-war foreign policy, is now using his political influence to turn his online competitors into criminals.

Supporters of an Internet gambling ban publicly deny they are motivated by a desire to curry favor with a wealthy donor. Instead, they give a number of high-minded reasons for wanting to ban this activity. Some claim that legalizing online gambling will enrich criminals and even terrorists! But criminalizing online casinos will not eliminate the demand for online casinos. Instead, passage of this legislation will likely guarantee that the online gambling market is controlled by criminals. Thus, it is those who support outlawing online gambling who may be aiding criminals and terrorists.

A federal online gambling ban would overturn laws in three states that allow online gambling. It would also end the ongoing debate over legalizing online gambling in many other states. Yet some have claimed that Congress must pass this law in order to protect states’ rights! Their argument is that citizens of states that ban Internet gambling may easily get around those laws by accessing online casinos operating in states where online gambling is legalized.

Even if the argument had merit that allowing states to legalize online gambling undermines laws in other states, it would not justify federal legislation on the issue. Nowhere in the Constitution is the federal government given any authority to regulate activities such as online gambling. Arguing that “states’ rights” justifies creating new federal crimes turns the Tenth Amendment, which was intended to limit federal power, on its head.

Many supporters of an Internet gambling ban sincerely believe that gambling is an immoral and destructive activity that should be outlawed. However, the proposed legislation is not at all about the morality of gambling. It is about whether Americans who do gamble should have the choice to do so online, or be forced to visit brick-and-mortar casinos.

Even if there was some moral distinction between gambling online or in a physical casino, prohibiting behavior that does not involve force or fraud has no place in a free society. It is no more appropriate for gambling opponents to use force to stop people from playing poker online than it would be for me to use force to stop people from reading pro-war, neocon writers.

Giving government new powers over the Internet to prevent online gambling will inevitably threaten all of our liberties. Federal bureaucrats will use this new authority to expand their surveillance of the Internet activities of Americans who have no interest in gambling, just as they used the new powers granted by the PATRIOT Act to justify mass surveillance.

The proposed ban on Internet gambling is a blatantly unconstitutional infringement on our liberties that will likely expand the surveillance state. Worst of all, it is all being done for the benefit of one powerful billionaire. Anyone who thinks banning online gambling will not diminish our freedoms while enriching criminals is making a losing bet.

Ron Paul, MD, is a former three-time Republican candidate for U.S. President and Congressman from Texas.

This article is reprinted with permission from the Ron Paul Institute for Peace and Prosperity.

Ludd vs. Schumpeter: Fear of Robot Labor is Fear of the Free Market – Article by Wendy McElroy

The New Renaissance Hat
Wendy McElroy
September 18, 2014
******************************

“Report Suggests Nearly Half of U.S. Jobs Are Vulnerable to Computerization,” screams a headline. The cry of “robots are coming to take our jobs!” is ringing across North America. But the concern reveals nothing so much as a fear—and misunderstanding—of the free market.

In the short term, robotics will cause some job dislocation; in the long term, labor patterns will simply shift. The use of robotics to increase productivity while decreasing costs works basically the same way as past technological advances, like the production line, have worked. Those advances improved the quality of life of billions of people and created new forms of employment that were unimaginable at the time.

Given that reality, the cry that should be heard is, “Beware of monopolies controlling technology through restrictive patents or other government-granted privilege.”

The robots are coming!

Actually, they are here already. Technological advance is an inherent aspect of a free market in which innovators seek to produce more value at a lower cost. Entrepreneurs want a market edge. Computerization, industrial control systems, and robotics have become an integral part of that quest. Many manual jobs, such as factory-line assembly, have been phased out and replaced by others, such as jobs related to technology, the Internet, and games. For a number of reasons, however, robots are poised to become villains of unemployment. Two reasons come to mind:

1. Robots are now highly developed and less expensive. Such traits make them an increasingly popular option. The Banque de Luxembourg News offered a snapshot:

The currently-estimated average unit cost of around $50,000 should certainly decrease further with the arrival of “low-cost” robots on the market. This is particularly the case for “Baxter,” the humanoid robot with evolving artificial intelligence from the US company Rethink Robotics, or “Universal 5” from the Danish company Universal Robots, priced at just $22,000 and $34,000 respectively.

Better, faster, and cheaper are the bases of increased productivity.

2. Robots will be interacting more directly with the general public. The fast-food industry is a good example. People may be accustomed to ATMs, but a robotic kiosk that asks, “Do you want fries with that?” will occasion widespread public comment, albeit temporarily.

Comment from displaced fast-food restaurant workers may not be so transient. NBC News recently described a strike by workers in an estimated 150 cities. The workers’ main demand was a $15 minimum wage, but they also called for better working conditions. The protesters, ironically, are speeding up their own unemployment by making themselves expensive and difficult to manage.

Labor costs

Compared to humans, robots are cheaper to employ—partly for natural reasons and partly because of government intervention.

Among the natural costs are training, safety needs, overtime, and personnel problems such as hiring, firing, and on-the-job theft. Now, according to Singularity Hub, robots can also be more productive in certain roles. They "can make a burger in 10 seconds (360/hr). Fast yes, but also superior quality. Because the restaurant is free to spend its savings on better ingredients, it can make gourmet burgers at fast food prices."

Government-imposed costs include minimum-wage laws and mandated benefits, as well as discrimination, liability, and other employment lawsuits. The employment advisory Workforce explained, “Defending a case through discovery and a ruling on a motion for summary judgment can cost an employer between $75,000 and $125,000. If an employer loses summary judgment—which, much more often than not, is the case—the employer can expect to spend a total of $175,000 to $250,000 to take a case to a jury verdict at trial.”

At some point, human labor will make sense only to restaurants that wish to preserve the “personal touch” or to fill a niche.

The underlying message of robotechnophobia

The tech site Motherboard aptly commented, “The coming age of robot workers chiefly reflects a tension that’s been around since the first common lands were enclosed by landowners who declared them private property: that between labour and the owners of capital. The future of labour in the robot age has everything to do with capitalism.”

Ironically, Motherboard points to one critic of capitalism who defended technological advances in production: none other than Karl Marx. He called machines “fixed capital.” The defense occurs in a segment called “The Fragment on Machines” in the unfinished but published manuscript Grundrisse der Kritik der Politischen Ökonomie (Outlines of the Critique of Political Economy).

Marx believed the “variable capital” (workers) dislocated by machines would be freed from the exploitation of their “surplus labor,” the difference between their wages and the selling price of a product, which the capitalist pockets as profit. Machines would benefit “emancipated labour” because capitalists would “employ people upon something not directly and immediately productive, e.g. in the erection of machinery.” The relationship change would revolutionize society and hasten the end of capitalism itself.

Never mind that the idea of “surplus labor” is intellectually bankrupt; technology ended up strengthening capitalism. But Marx was right about one thing: Many workers have been emancipated from soul-deadening, repetitive labor. Many who feared technology did so because they viewed society as static. The free market is the opposite. It is a dynamic, quick-response ecosystem of value. Internet pioneer Vint Cerf argues, “Historically, technology has created more jobs than it destroys and there is no reason to think otherwise in this case.”

Forbes pointed out that U.S. unemployment rates have changed little over the past 120 years (1890 to 2014) despite massive advances in workplace technology:

There have been three major spikes in unemployment, all caused by financiers, not by engineers: the railroad and bank failures of the Panic of 1893, the bank failures of the Great Depression, and finally the Great Recession of our era, also stemming from bank failures. And each time, once the bankers and policymakers got their houses in order, businesses, engineers, and entrepreneurs restored growth and employment.

The drive to make society static is a powerful obstacle to that restored employment. How does society become static? A key word in the answer is “monopoly.” But we should not conflate two distinct forms of monopoly.

A monopoly established by aggressive innovation and excellence will dominate only as long as it produces better or less expensive goods than others can. Monopolies created by crony capitalism are entrenched expressions of privilege that serve elite interests. Crony capitalism is the economic arrangement by which business success depends upon having a close relationship with government, including legal privileges.

Restrictive patents are a basic building block of crony capitalism because they grant a business the “right” to exclude competition. Many libertarians deny the legitimacy of any patents. The nineteenth-century classical liberal Eugen von Böhm-Bawerk rejected patents on classically Austrian grounds. He called them “legally compulsive relationships of patronage which are based on a vendor’s exclusive right of sale”: in short, a government-granted privilege that violated every man’s right to compete freely. Modern critics of patents include the Austrian economist Murray Rothbard and intellectual property attorney Stephan Kinsella.

Pharmaceuticals and technology are particularly patent-hungry. The extent of the hunger can be gauged by how much money companies spend to protect their intellectual property rights. In 2011, Apple and Google reportedly spent more on patent lawsuits and purchases than on research and development. A New York Times article addressed the costs imposed on tech companies by “patent trolls”—people who do not produce or supply services based on patents they own but use them only to collect licensing fees and legal settlements. “Litigation costs in the United States related to patent assertion entities [trolls],” the article claimed, “totaled nearly $30 billion in 2011, more than four times the costs in 2005.” These costs and associated ones, like patent infringement insurance, harm a society’s productivity by creating stasis and preventing competition.

Dean Baker, co-director of the progressive Center for Economic Policy Research, described the difference between robots produced on the marketplace and robots produced by monopoly. Private producers “won’t directly get rich” because “robots will presumably be relatively cheap to make. After all, we can have robots make them. If the owners of robots get really rich it will be because the government has given them patent monopolies so that they can collect lots of money from anyone who wants to buy or build a robot.” The monopoly “tax” will be passed on to impoverish both consumers and employees.

Conclusion

Ultimately, we should return again to the wisdom of Joseph Schumpeter, who reminds us that technological progress, while it can change the patterns of production, tends to free up resources for new uses, making life better over the long term. In other words, the displacement of workers by robots is just creative destruction in action. Just as the car starter replaced the buggy whip, the robot might replace the burger-flipper. Perhaps the burger-flipper will migrate to a new profession, such as caring for an elderly person or cleaning homes for busy professionals. But there are always new ways to create value.

An increased use of robots will cause labor dislocation, which will be painful for many workers in the near term. But if market forces are allowed to function, the dislocation will be temporary. And if history is a guide, the replacement jobs will require skills that better express what it means to be human: communication, problem-solving, creation, and caregiving.

Wendy McElroy (wendy@wendymcelroy.com) is an author, editor of ifeminists.com, and Research Fellow at The Independent Institute (independent.org).

This article was originally published by The Foundation for Economic Education.