
Majoritarian Processes versus Open Playing Fields – Video by G. Stolyarov II

Putting innovation to a vote is never a good idea. Consider the breakthroughs that have improved our lives the most during the 20th and early 21st centuries. Did anyone vote for or ordain the creation of desktop PCs, the Internet, smartphones, or tablet computers?

It is only when some subset of reality is a fully open playing field, away from the notice of vested interests or their ability to control it, that innovation can emerge in a sufficiently mature and pervasive form that any attempts to suffocate it politically become seen as transparently immoral and protectionist.

All major improvements to our lives come from these open playing fields.

References
– “Putting Innovation to a Vote? Majoritarian Processes versus Open Playing Fields” – Essay by G. Stolyarov II
– “Satoshi Nakamoto” – Wikipedia
– The Seasteading Institute

Putting Innovation to a Vote? Majoritarian Processes versus Open Playing Fields – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
February 4, 2014
******************************

Putting innovation to a vote is never a good idea. Consider the breakthroughs that have improved our lives the most during the 20th and early 21st centuries. Did anyone vote for or ordain the creation of desktop PCs, the Internet, smartphones, or tablet computers? No: that plethora of technological treasures was made available by individuals who perceived possibilities unknown to the majority, and who devoted their time, energy, and resources toward making those possibilities real. The electronic technologies which were unavailable to even the richest, most powerful men of the early 20th century now open up hitherto unimaginable possibilities even to children of poor families in Sub-Saharan Africa.

On the other hand, attempts to innovate through majority decisions, either by lawmakers or by the people directly, have failed to yield fruit. Although virtually everyone would consider education, healthcare, and defense to be important, fundamental objectives, the goals of universal cultivation of learning, universal access to healthcare, and universal security against crime and aggression have not been fulfilled, in spite of massive, protracted, and expensive initiatives throughout the Western world to achieve them. While it is easy even for people of little means to experience any art, music, literature, films, and games they desire, it can be extremely difficult for even a person of ample means to receive the effective medical care, high-quality formal education, and assurance of safety from both criminals and police brutality that virtually anyone would desire.

Why is it the case that, in the essentials, the pace of progress has been far slower than in the areas most people would deem to be luxuries or entertainment goods? Why is it that the greatest progress in the areas treated by most as direct priorities comes as a spillover benefit from the meteoric growth in the original luxury/entertainment areas? (Consider, as an example, the immense benefits that computers have brought to medical research and patient care, or the vast possibilities for using the Internet as an educational tool.) In the areas from which the eye of formal decision-making systems is turned away, experimentation can commence, and courageous thinkers and tinkerers can afford to iterate without asking permission. So teenagers experimenting in their garages can create computer firms that shape the economy of a generation. So a pseudonymous digital activist, Satoshi Nakamoto, can invent a cryptocurrency algorithm that no central bank or legislature would have allowed to emerge at a proposal stage – but which all governments of the world must now accept as a fait accompli that is not going away.

Most people without political connections or strong anti-free-enterprise ideologies welcome these advances, but no such breakthroughs can occur if they need to be cleared through a formal majoritarian system of any stripe. A majoritarian system, vulnerable to domination by special interests who benefit from the economic and societal arrangements of the status quo, does not welcome their disruption. Most individuals have neither the power nor the tenacity to shepherd through the political process an idea that would be merely a nice addition rather than an urgent necessity. On the other hand, the vested and connected interests whose revenue streams, influence, and prestige would be disrupted by the innovation have every incentive to manipulate the political process and thwart the innovations they can anticipate.

It is only when some subset of reality is a fully open playing field, away from the notice of vested interests or their ability to control it, that innovation can emerge in a sufficiently mature and pervasive form that any attempts to suffocate it politically become seen as transparently immoral and protectionist. The open playing field can be any area that is simply of no interest to the established powers – as could be said of personal computers through the 1990s. Eventually, these innovations evolve so dramatically as to upturn the major economic and social structures underpinning the establishment of a given era. The open playing field can be a jurisdiction more welcoming to innovators than its counterparts, and beyond the reach of innovation’s staunchest opponents. Seasteading, for example, would enable more competition among jurisdictions, and is particularly promising as a way of generating more such open playing fields. The open playing field can be an entirely new area of human activity where the power structures are so fluid that staid, entrenched interests have not yet had time to emerge. The early days of the Internet and of cryptocurrencies are examples of these kinds of open playing fields. The open playing field can even occur after a major upheaval has dislodged most existing power structures, as occurred in Japan after World War II, when decades of immense progress in technology and infrastructure followed the toppling of the former militaristic elite by the United States.

The beneficent effect of the open playing field is made possible not merely due to the lack of formal constraints, but also due to the lack of constraints on human thinking within the open playing field. When the world is fresh and new, and anything seems possible, human ingenuity tends to rise to the occasion. If, on the other hand, every aspect of life is hyper-regimented and weighed down by the precedents, edicts, compromises, and traditions of era upon era – even with the best intentions toward optimization, justice, or virtue – the existing strictures constrain most people’s view of what can be achieved, and even the innovators will largely struggle to achieve slight tweaks to the status quo rather than the kind of paradigm-shifting change that propels civilization forward and upward. In struggling to conform to or push against the tens of thousands of prescriptions governing mundane life, people lose sight of astonishing futures that might be.

The open playing fields may not be for everyone, but they should exist for anyone who wishes to test a peaceful vision for the future. Voting works reasonably well in the Western world (most of the time) when it comes to selecting functionaries for political office, or when it is an instrument within a deliberately gridlocked Constitutional system designed to preserve the fundamental rules of the game rather than to prescribe each player’s move. But voting is a terrible mechanism for invention or creativity; it reduces the visions of the best and brightest – the farthest-seeing among us – to the myopia of the median voter. This is why you should be glad that nobody voted on the issue of whether we should have computers, or connect them to one another, or experiment with stores of value in a bit of code. Instead, you should find (or create!) an open playing field and give your own designs free rein.

Cryptocurrencies as a Single Pool of Wealth – Video by G. Stolyarov II

Mr. Stolyarov offers economic thoughts as to the purchasing power of decentralized electronic currencies, such as Bitcoin, Litecoin, and Dogecoin.

When considering the real purchasing power of the new cryptocurrencies, we should be looking not at Bitcoin in isolation, but at the combined pool of all cryptocurrencies in existence. In a world of many cryptocurrencies and the possibility of the creation of new cryptocurrencies, a single Bitcoin will purchase less than it could have purchased in a world where Bitcoin was the only possible cryptocurrency.

References

– “Cryptocurrencies as a Single Pool of Wealth: Thoughts on the Purchasing Power of Decentralized Electronic Money” – Essay by G. Stolyarov II

– Donations to Mr. Stolyarov via The Rational Argumentator:
Bitcoin – 1J2W6fK4oSgd6s1jYr2qv5WL8rtXpGRXfP
Dogecoin – DCgcDZnTAhoPPkTtNGNrWwwxZ9t5etZqUs

– “2013: Year Of The Bitcoin” – Kitco News – Forbes Magazine – December 10, 2013
– “Bitcoin” – Wikipedia
– “Litecoin” – Wikipedia
– “Namecoin” – Wikipedia
– “Peercoin” – Wikipedia
– “Dogecoin” – Wikipedia
– “Tulip mania” – Wikipedia
– “Moore’s Law” – Wikipedia

– The Theory of Money and Credit (1912) – Ludwig von Mises

Cryptocurrencies as a Single Pool of Wealth: Thoughts on the Purchasing Power of Decentralized Electronic Money – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
January 12, 2014
******************************

The recent meteoric rise in the dollar price of Bitcoin – from around $12 at the beginning of 2013 to several peaks above $1000 at the end – has brought widespread attention to the prospects for and future of cryptocurrencies. I have no material stake in Bitcoin (although I do accept donations), and this article will not attempt to predict whether the current price of Bitcoin signifies mostly lasting value or a bubble akin to the Dutch tulip mania of the 1630s. Instead of speculation about any particular price level, I hope here to establish a principle pertaining to the purchasing power of cryptocurrencies in general, since Bitcoin is no longer the only one.

Although Bitcoin, developed in 2009 by the pseudonymous Satoshi Nakamoto, has the distinction and advantage of having been the first cryptocurrency to gain widespread adoption, others, such as Litecoin (2011), Namecoin (2011), Peercoin (2012), and even Dogecoin (2013) – the first cryptocurrency based on an Internet meme – have followed suit. Many of these cryptocurrencies’ fundamental elements are similar. Litecoin’s algorithm is nearly identical to Bitcoin’s (the major difference being a fourfold increase in the rate of block processing and transaction confirmation), and the Dogecoin algorithm is the same as that of Litecoin. The premise behind each cryptocurrency is built-in deflation: the rate of production slows with time, and only 21 million Bitcoins can ever be “mined” electronically. The limit for the total pool of Litecoins is 84 million, whereas the total Dogecoins in circulation will approach an asymptote of 100 billion.
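
As a concrete illustration, Bitcoin’s 21-million limit follows from its issuance schedule – a 50-coin block reward that halves every 210,000 blocks – which sums as a geometric series. A minimal Python sketch (ignoring the protocol’s integer satoshi rounding, which places the true cap slightly below 21 million):

```python
# Why Bitcoin's supply approaches 21 million: the block reward starts at
# 50 BTC and halves every 210,000 blocks, so total issuance is the
# geometric series 210,000 * 50 * (1 + 1/2 + 1/4 + ...) -> 21,000,000.
BLOCKS_PER_HALVING = 210_000
INITIAL_REWARD = 50.0

def total_supply(num_halvings: int) -> float:
    """Coins issued after `num_halvings` halving eras (real-valued sketch)."""
    return sum(BLOCKS_PER_HALVING * INITIAL_REWARD / 2**i for i in range(num_halvings))

print(total_supply(1))   # 10,500,000 coins after the first era
print(total_supply(33))  # converges on the 21,000,000 limit
```

Litecoin’s analogous parameters (a 50-coin reward halving every 840,000 blocks) yield its 84-million limit in the same way.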


The deflationary mechanism of each cryptocurrency is admirable; it is an attempt to preserve real purchasing power. With fiat paper money printed by an out-of-control central bank, an increase in the number and denomination of papers (or their electronic equivalents) circulating in the economy will not increase material prosperity or the abundance of real goods; it will only raise the prices of goods in terms of fiat-money quantities. Ludwig von Mises, in his 1912 Theory of Money and Credit, outlined the redistributive effects of inflation; those who get the new money first (typically politically connected cronies and the institutions they control) will gain in real purchasing power, while those to whom the new money spreads last will lose. Cryptocurrencies are independent of any central issuer (although different organizations administer the technical protocols of each cryptocurrency) and so are not vulnerable to such redistributive inflationary pressures induced by political considerations. This is the principal advantage of cryptocurrencies over any fiat currency issued by a governmental or quasi-governmental central bank. Moreover, the real expenditure of resources (computer hardware and electricity) for mining cryptocurrencies provides a built-in scarcity that further restricts the possibility of inflation.
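
Mises’s point about first receivers can be illustrated with a toy quantity-theory model (all numbers hypothetical): prices are assumed to scale proportionally with the money supply, and the insider spends the new money before that adjustment occurs.

```python
# Toy model of inflation's redistributive effect (hypothetical numbers):
# whoever spends newly printed money before prices adjust captures real
# purchasing power; a late receiver of the same sum captures less.
initial_supply = 1_000_000.0
price_level = 1.0                          # prices proportional to money supply

new_money = 100_000.0                      # handed first to a connected insider
goods_for_insider = new_money / price_level          # spent at OLD prices

price_level *= (initial_supply + new_money) / initial_supply  # prices catch up

goods_for_late_receiver = new_money / price_level    # same sum, NEW prices
print(goods_for_insider, goods_for_late_receiver)    # the insider gets ~10% more goods
```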

Yet there is another element to consider. Virtually any major cryptocurrency can be exchanged freely for any other (with some inevitable but minor transaction costs and spreads) as well as for national fiat currencies (with higher transaction costs in both time and money). For instance, on January 12, 2014, one Bitcoin could trade for approximately $850, while one Litecoin could trade for approximately $25, implying an exchange rate of 34 Litecoins per Bitcoin. Due to the similarity in the technical specifications of each cryptocurrency (similar algorithms, similar built-in scarcity, ability to be mined by the same computer hardware, and similar decentralized, distributed generation), any cryptocurrency could theoretically serve an identical function to any other. (The one caveat to this principle is that any future cryptocurrency algorithm that offers increased security from theft could crowd out the others if enough market participants come to recognize it as offering more reliable protection against hackers and fraudsters than the current Bitcoin algorithm and Bitcoin-oriented services do.) Moreover, any individual or organization with sufficient resources and determination could initiate a new cryptocurrency, much as Billy Markus initiated Dogecoin in part with the intent to provide an amusing reaction to the Bitcoin price crash in early December 2013.

This free entry into the cryptocurrency-creation market, combined with the essential similarity of all cryptocurrencies to date and the ability to readily exchange any one for any other, suggests that we should not be considering the purchasing power of Bitcoin in isolation. Rather, we should view all cryptocurrencies combined as a single pool of wealth. The total purchasing power of this pool of cryptocurrencies in general would depend on a multitude of real factors, including the demand among the general public for an alternative to governmental fiat currencies and the ease with which cryptocurrencies facilitate otherwise cumbersome or infeasible financial transactions. In other words, the properties of cryptocurrencies as stores of value and media of exchange would ultimately determine how much they could purchase, and the activities of arbitrageurs among the cryptocurrencies would tend to produce exchange rates that mirror the relative volumes of each cryptocurrency in existence. For instance, if we make the simplifying assumption that the functional properties of Bitcoin and Litecoin are identical for the practical purposes of users, then the exchange rate between Bitcoins and Litecoins should asymptotically approach 1 Bitcoin to 4 Litecoins, since this will be the ultimate ratio of the number of units of these cryptocurrencies. Of course, at any given time, the true ratio will vary, because each cryptocurrency was initiated at a different time, each has a different amount of computer hardware devoted to mining it, and none has come close to approaching its asymptotic volume.
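
The arbitrage argument above reduces to quick arithmetic, using the January 12, 2014, prices quoted earlier and the two currencies’ asymptotic supply limits:

```python
# Observed exchange rate vs. the supply-ratio prediction for functionally
# identical cryptocurrencies (prices as quoted in the article for Jan 12, 2014).
btc_usd, ltc_usd = 850.0, 25.0
observed_rate = btc_usd / ltc_usd     # Litecoins per Bitcoin on that date: 34

btc_limit, ltc_limit = 21_000_000, 84_000_000
supply_ratio = ltc_limit / btc_limit  # asymptotic prediction: 4 LTC per BTC
print(observed_rate, supply_ratio)
```

The gap between 34 and 4 reflects exactly the factors named above: different launch dates, different amounts of mining hardware, and the distance of each currency from its asymptotic volume.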

What implication does this insight have for the purchasing power of Bitcoin? In a world of many cryptocurrencies and the possibility of the creation of new cryptocurrencies, a single Bitcoin will purchase less than it could have purchased in a world where Bitcoin was the only possible cryptocurrency. The degree of this effect depends on how many cryptocurrencies are in existence. This, in turn, depends on how many new cryptocurrency models or creative tweaks to existing cryptocurrency models are originated – since it is reasonable to posit that users will have little motive to switch from a more established cryptocurrency to a completely identical but less established cryptocurrency, all other things being equal. If new cryptocurrencies are originated with greater rapidity than the increase in the real purchasing power of cryptocurrencies in total, inflation may become a problem in the cryptocurrency world. The real bulwark against cryptocurrency inflation, then, is not the theoretical upper limit on any particular cryptocurrency’s volume, but rather the practical limitations on the amount of hardware that can be devoted to mining all cryptocurrencies combined. Will the scarcity of mining effort, in spite of future exponential advances in computer processing power in accordance with Moore’s Law, sufficiently restrain the inflationary pressures arising from human creativity in the cryptocurrency arena? Only time will tell.
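
The dilution effect described here can be sketched by holding the pool’s total purchasing power fixed while the count of interchangeable units grows; the $10-billion pool value below is purely hypothetical.

```python
# "Single pool of wealth" sketch: with total real value fixed, each unit's
# price falls as more functionally identical units enter the pool.
def unit_price(total_pool_value: float, unit_counts: list) -> float:
    """Price per unit when every unit of every identical currency is
    interchangeable and the pool's value is spread across all of them."""
    return total_pool_value / sum(unit_counts)

pool = 10_000_000_000.0                  # hypothetical total value, dollars
print(unit_price(pool, [21e6]))          # Bitcoin alone: ~$476 per coin
print(unit_price(pool, [21e6, 84e6]))    # add Litecoin's units: ~$95 per coin
```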

Transhumanism, Technology, and Science: To Say It’s Impossible Is to Mock History Itself – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 30, 2013
******************************
One of the most common arguments made against Transhumanism, Technoprogressivism, and the transformative potential of emerging, converging, disruptive, and transformative technologies may also be the weakest: technical infeasibility. Some thinkers attack the veracity of Transhumanist claims on moral grounds, arguing that we are committing a transgression against human dignity (in turn often based on the ontological premise of a static human nature that shan’t be tampered with), or on grounds of safety, arguing that humanity isn’t responsible enough to wield such technologies without unleashing their destructive capabilities. These categories of counter-argument (efficacy and safety, respectively) are more often than not made by people somewhat more familiar with the community and its common points of rhetoric.
***
In other words, these are the real, salient, and significant problems that need to be addressed by Transhumanist and Technoprogressive communities. The good news is that the communities making the most progress in deliberating the possible repercussions of emerging technologies are precisely the Transhumanist and Technoprogressive ones. The large majority of thinkers and theoreticians working on Existential Risk and Global Catastrophic Risk, like The Future of Humanity Institute and the Lifeboat Foundation, share Technoprogressive inclinations. Meanwhile, the largest proponents of the need to ensure wide availability of enhancement technologies, as well as the need for provision of personhood rights to non-biologically-substrated persons, are found amidst the ranks of Technoprogressive think tanks like the IEET.
***

A more frequent Anti-Transhumanist and Anti-Technoprogressive counter-argument, by contrast, and one most often launched by people approaching Transhumanist and Technoprogressive communities from the outside, with little familiarity with their common points of rhetoric, is the claim of technical infeasibility based upon little more than sheer incredulity.

Sometimes a concept or notion simply seems too unprecedented to be possible. But it’s just too easy for us to get stuck in a spacetime rut along the continuum of culture and feel that if something were possible, it would have either already happened or would be in the final stages of completion today. “If something is possible, then why hasn’t anyone done it? Shouldn’t the fact that it has yet to be accomplished indicate that it isn’t possible?” This conflates ought with is (which Hume showed us is a fallacy) and ought with can. Ought is not necessarily correlative with either. At the risk of saying the laughably obvious, something must occur at some point in order for it to occur at all. The Moon landing happened in 1969 because it happened in 1969, and to have argued in 1968 that it simply wasn’t possible solely because it had never been done before would not have been a valid argument for its technical infeasibility.

If history has shown us anything, it has shown us that history is a fantastically poor indicator of what will and will not become feasible in the future. Statistically speaking, it seems as though the majority of things that were said to be impossible to implement via technology have nonetheless come into being. Likewise, it seems as though the majority of feats it was said to be possible to facilitate via technology have also come into being. The ability to possiblize the seemingly impossible via technological and methodological in(ter)vention has been exemplified throughout the course of human history so prominently that we might as well consider it a statistical law.

We can feel the sheer fallibility of the infeasibility-from-incredulity argument intuitively when we consider how credible it would have seemed a mere 100 years ago to claim that we would soon be able to send sentences into the air, to be routed to a device in your pocket (and only your pocket, not the device in the pocket of the person sitting right beside you). How likely would it have seemed 200 years ago if you claimed that 200 years hence it would be possible to sit comfortably and quietly in a chair in the sky, inside a large tube of metal that fails to fall fatally to the ground?

Simply look around you. An idiosyncratic genus of great ape did this! Consider how remarkably absurd it would seem for the gorilla genus to have coordinated their efforts to build skyscrapers; to engineer devices that took them to the Moon; to be able to send a warning or mating call to the other side of the earth in less time than such a call could actually be made via physical vocal cords. We live in a world of artificial wonder, and act as though it were the most mundane thing in the world. But considered in terms of geological time, the unprecedented feat of culture and artificial artifact just happened. We are still in the fledgling infancy of the future, which only began when we began making it ourselves.
***

We have no reason whatsoever to doubt the eventual technological feasibility of anything, really, when we consider all the things that were said to be impossible yet happened, all the things that were said to be possible and did happen, and all the things that were completely unforeseen yet happened nonetheless. In light of history, it seems more likely that a given thing will eventually be possible via technology than that it won’t ever be possible. I fully appreciate the grandeur of this claim – but I stand by it nonetheless. To claim that a given ability will probably never become possible to implement via technology is to laugh in the face of history to some extent.

The main exceptions to this claim are abilities wherein the route of implementation is limited or specified in advance. Thus it probably will never be possible to, say, infer the states of all the atoms comprising the Eiffel Tower from the state of a single atom in your fingernail. These are categories of ability where the implementation is specified as part of the end-ability itself: in the case above, the end-ability was to infer the state of all the atoms in the Eiffel Tower from the state of a single atom.

These exceptions also serve to illustrate the paramount feature allowing technology to possiblize the seemingly improbable: novel means of implementation. Very often there is a bottleneck in the current system we use to accomplish something that limits the scope of its abilities and prevents certain objectives from being facilitated by it. In such cases a whole new paradigm of approach is what moves progress forward toward realizing that objective. If the goal is the reversal and indefinite remediation of the causes and sources of aging, the paradigms of medicine available at the turn of the 20th century would have seemed unable to accomplish such a feat.

The new paradigm of biotechnology and genetic engineering was needed to formulate a scientifically plausible route to the reversal of aging-correlated molecular damage – a paradigm largely absent from the medical paradigms and practices common at the turn of the 20th century. It is the notion of a new route to implementation, a wholly novel way of making the changes that could lead to a given desired objective, that constitutes the real ability-actualizing capacity of technology – and one that such cases of specified implementation fail to take account of.

One might think that there are other clear exceptions to this as well: devices or abilities that contradict the laws of physics as we currently understand them – e.g., perpetual-motion machines. Yet even here we see many historical antecedents exemplifying our short-sighted foresight in regard to “the laws of physics”. Our understanding of the physical “laws” of the universe undergoes massive upheaval from generation to generation. Thomas Kuhn’s The Structure of Scientific Revolutions challenged the predominant view that scientific progress occurred by accumulated development and discovery when he argued that scientific progress is instead driven by the rise of new conceptual paradigms categorically dissimilar to those that preceded them (Kuhn, 1962), and which then define the new predominant directions in research, development, and discovery in almost all areas of scientific endeavor and conceptualization.

Kuhn’s insight can be seen to be paralleled by the recent rise in popularity of Singularitarianism, which today seems to have lost its strict association with I.J. Good‘s posited type of intelligence explosion created via recursively self-modifying strong AI, and now seems to encompass any vision of a profound transformation of humanity or society through technological growth, and the introduction of truly disruptive emerging and converging (e.g., NBIC) technologies.

This epistemic paradigm holds that the future is less determined by the smooth progression of existing trends and more by the massive impact of specific technologies and occurrences – the revolution of innovation. Kurzweil’s own version of Singularitarianism (Kurzweil, 2005) uses the systemic progression of trends in order to predict a state of affairs created by the convergence of such trends, wherein the predictable progression of trends points to their own destruction in a sense, as the trends culminate in our inability to predict past that point. We can predict that there are factors that will significantly impede our predictive ability thereafter. Kurzweil’s and Kuhn’s thinking are also paralleled by Buckminster Fuller in his notion of ephemeralization (i.e., doing more with less), and by the post-industrial information economies and socioeconomic paradigms described by Alvin Toffler (Toffler, 1970), John Naisbitt (Naisbitt, 1982), and Daniel Bell (Bell, 1973), among others.

It can also partly be seen to be inherent in almost all formulations of technological determinism, especially variants of what I call reciprocal technological determinism (not simply that technology determines or largely constitutes the determining factors of societal states of affairs, not simply that technology affects culture, but rather that culture affects technology, which then affects culture, which then affects technology) à la Marshall McLuhan (McLuhan, 1964). This broad epistemic paradigm, wherein the state of progress is determined more by small but radically disruptive changes, innovations, and deviations than by the continuation or convergence of smooth and slow-changing trends, can be seen to be inherent in variants of technological determinism because technology is ipso facto (i.e., by its very defining attributes) categorically new and paradigmatically disruptive, and if culture is affected significantly by technology, then it is also affected by punctuated instances of radical innovation unanticipated by trends.

That being said, as Kurzweil has noted, a given technological paradigm “grows out of” the paradigm preceding it, and so the extents and conditions of a given paradigm will to some extent determine the conditions and allowances of the next paradigm. But that is not to say that they are predictable; they may be inherent while still remaining non-apparent. After all, the trend of mechanical components’ increasing miniaturization could be seen hundreds of years ago (e.g., Babbage knew that the mechanical precision available via the manufacturing paradigms of his time impeded the realization of his computing engines, but that their implementation would one day be made possible by the trend of increasingly precise manufacturing standards), but the fact that this trend could culminate in the ephemeralization of Bucky Fuller (Fuller, 1976) or the mechanosynthesis of K. Eric Drexler (Drexler, 1986) was far from apparent.

Moreover, the types of occurrence allowed by a given scientific or methodological paradigm seem, at least intuitively, to expand rather than contract as we move forward through history. This can be seen lucidly in the physics of the early 20th century, which delivered such conceptual affronts to our intuitive notions of the possible as quantum non-locality (i.e., quantum entanglement – and with it quantum information teleportation and even quantum energy teleportation, or in other words seemingly faster-than-light correlation between spatially separated physical entities), Einstein’s theory of relativity (which implied such counter-intuitive notions as the measurement of quantities being relative to the velocity of the observer, e.g., the passing of time as measured by clocks will be different in space than on earth), and the hidden-variable theory of David Bohm (which implied such notions as the velocity of any one particle being determined by the configuration of the entire universe). These notions belligerently contradict what we feel intuitively to be possible. Here we have claims that such strange abilities as informational and energetic teleportation, faster-than-light correlation of physical and/or informational states, and spacetime dilation are natural, non-technological properties and abilities of the physical universe.

Technology is Man’s foremost mediator of change; it is by and large through the use of technology that we expand the parameters of the possible. This is why the fact that these seemingly fantastic feats were claimed to be possible “naturally”, without technological implementation or mediation, is so significant. The notion that they are possible without technology makes them all the more fantastical and intuitively improbable.

We also sometimes forget the even more fantastic claims of what can be done through the use of technology, such as stellar engineering and mega-scale engineering, made by some of the big names in science. There is the Dyson Sphere of Freeman Dyson, which details a technological method of harnessing potentially the entire energetic output of a star (Dyson, 1960). One can also find speculation made by Dyson concerning the ability for “life and communication [to] continue for ever, using a finite store of energy” in an open universe by utilizing smaller and smaller amounts of energy to power slower and slower computationally emulated instances of thought (Dyson, 1979).
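
The scale of Dyson’s proposal bears a back-of-envelope check. The solar luminosity below is the standard astronomical figure; the ~18 TW value for world power consumption is a rough present-day estimate, not a number from Dyson’s paper.

```python
# Rough scale of a Dyson sphere: the Sun's total radiated power versus
# humanity's current power consumption.
SOLAR_LUMINOSITY_W = 3.8e26   # watts, standard astronomical value
WORLD_POWER_W = 1.8e13        # watts (~18 TW), rough modern estimate

ratio = SOLAR_LUMINOSITY_W / WORLD_POWER_W
print(f"A full Dyson sphere would capture ~{ratio:.0e} times current human use")
```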

There is the Tipler Cylinder (also called the Tipler Time Machine) of Frank J. Tipler, which described a dense cylinder of infinite length rotating about its longitudinal axis to create closed timelike curves (Tipler, 1974). While Tipler speculated that a cylinder of finite length could produce the same effect if rotated fast enough, he didn’t provide a mathematical solution for this second claim. There is also speculation by Tipler on the ability to utilize energy harnessed from gravitational shear created by the forced collapse of the universe at different rates and different directions, which he argues would allow the universe’s computational capacity to diverge to infinity, essentially providing computationally emulated humans and civilizations the ability to run for an infinite duration of subjective time (Tipler, 1986, 1997).

We see such feats of technological grandeur paralleled by Kurt Gödel, who produced an exact solution to the Einstein field equations describing a cosmological model of a rotating universe (Gödel, 1949). While cosmological evidence (e.g., that our universe does not appear to be rotating) indicates that his solution does not describe the universe we live in, it nonetheless constitutes a hypothetically possible cosmology in which time travel (again, via a closed timelike curve) is possible. And because traversing such a closed timelike curve would require enormous acceleration, of a magnitude unattainable without technology, Gödel’s case constitutes a hypothetical cosmological model allowing for technological time travel. This might be non-obvious, since Gödel’s model involves no such technological feat as an infinitely long rotating cylinder; the result derives instead from specific physical and cosmological (i.e., non-technological) constants and properties.
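For the curious, one form of Gödel's line element often quoted in the relativity literature (conventions vary in sign and coordinate labels) is:

```latex
ds^2 = a^2 \left[ -\left(dt + e^{x}\, dz\right)^2 + dx^2 + \tfrac{1}{2}\, e^{2x}\, dz^2 + dy^2 \right],
\qquad a^2 = \frac{1}{2\omega^2},
```

where \(\omega\) is the angular velocity of the cosmic rotation. The cross term coupling \(dt\) and \(dz\) is what tilts the light cones: far enough from any chosen axis, circles in the \(z\)-direction become closed timelike curves.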

These are large claims made by large names in science (i.e., people who do not make claims frivolously, and who in most cases require quantitative indications of possibility, often in the form of mathematical solutions, as in the cases mentioned above), and all of them describe feats realizable solely through the use of technology. Such technological feats as the computational emulation of the human nervous system and the technological eradication of involuntary death pale in comparison to the sheer grandeur of the claims and conceptualizations outlined above.

We live in a very strange universe, which is easy to forget amidst our feigned mundanity. We have no excuse to express incredulity at Transhumanist and Technoprogressive conceptualizations, considering how stoically we accept such notions as the existence of sentient matter (i.e., biological intelligence) or the ability of a genus of great ape to stand on extraterrestrial land.

Thus, one of the most common counter-arguments launched at many Transhumanist and Technoprogressive claims and conceptualizations – namely, technical infeasibility based upon nothing more than incredulity and/or the lack of a definitive historical precedent – is also one of the most baseless. It would be far more credible to argue for the technical infeasibility of a given endeavor within a certain time-frame. Not only do we have little, if any, indication that a given ability or endeavor will fail to eventually become realizable via technology given enough development time; we even have historical indication of the very antithesis of this claim, in the many instances in which a feat was pronounced impossible, only to be realized through technological mediation thereafter.

It is high time we accepted the fallibility of base incredulity and the infeasibility of the technical-infeasibility argument. I remain stoically incredulous at the audacity of fundamental incredulity, for nothing should be incredible to man, who makes his own credibility in any case, and who is most at home in the necessary superfluous.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

References

Bell, D. (1973). “The Coming of Post-Industrial Society: A Venture in Social Forecasting.” New York: Basic Books. ISBN 0-465-01281-7.

Dyson, F. (1960). “Search for Artificial Stellar Sources of Infrared Radiation.” Science 131: 1667-1668.

Dyson, F. (1979). “Time without end: Physics and biology in an open universe,” Reviews of Modern Physics 51 (3): 447-460.

Fuller, R.B. (1938). “Nine Chains to the Moon.” Anchor Books, pp. 252-259.

Gödel, K. (1949). “An example of a new type of cosmological solution of Einstein’s field equations of gravitation”. Rev. Mod. Phys. 21 (3): 447–450.

Kuhn, Thomas S. (1962). “The Structure of Scientific Revolutions (1st ed.).” University of Chicago Press. LCCN 62019621.

Kurzweil, R. (2005). “The Singularity is Near.” Penguin Books.

McLuhan, M. (1964). “Understanding Media: The Extensions of Man.” 1st ed. New York: McGraw-Hill.

Naisbitt, J. (1982). “Megatrends: Ten New Directions Transforming Our Lives.” Warner Books.

Tipler, F. (1974). “Rotating Cylinders and Global Causality Violation.” Physical Review D 9: 2203-2206.

Tipler, F. (1986). “Cosmological Limits on Computation”, International Journal of Theoretical Physics 25 (6): 617-661.

Tipler, F. (1997). “The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead.” New York: Doubleday. ISBN 0-385-46798-2.

Toffler, A. (1970). “Future shock.” New York: Random House.