
U.S. Transhumanist Party Discussion Panel on Artificial Intelligence – January 8, 2017

The New Renaissance Hat

The U.S. Transhumanist Party’s first expert discussion panel, hosted in conjunction with the Nevada Transhumanist Party, asked panelists to consider emerging developments in artificial intelligence.

The panel took place on Sunday, January 8, 2017, at 10 a.m. U.S. Pacific Time.

This panel was moderated by Gennady Stolyarov II, Chairman of the U.S. Transhumanist Party and Chief Executive of the Nevada Transhumanist Party. Key questions addressed include the following:

(i) What do you think will be realistic, practical applications of artificial intelligence toward improving human lives during the next 5 years?
(ii) Are you genuinely concerned about existential risk stemming from AI, or do you think those concerns are exaggerated / overhyped (or do you have some intermediate position on these issues)?
(iii) On the other hand, do you perceive significant tendencies in contemporary culture to overhype the positive / functional capabilities of AI?
(iv) How can individuals, particularly laypersons, become better at distinguishing between genuine scientific and technological advances in AI and hype / fear-mongering?
(v) What is your techno-optimistic vision for how AI can help improve the future of human (and transhuman) beings?
(vi) What are your thoughts regarding prognostications of an AI-caused technological Singularity? Are they realistic?

Panelists

Zak Field is an international speaker, consultant, games designer, and entrepreneur based in Norwich, UK. A rising thought leader in Mixed Realities (VR/AR), Zak speaks and consults on Mixed Realities-related topics like gamification, Virtual Reality (VR), Augmented Reality (AR), Robotics, Artificial Intelligences (AIs), and the Internet of Things (IoT).

In 2015, Zak partnered with Futurist Miss Metaverse as co-founder of BodAi, a robotics and AI company developing Bods, lifelike humanoid robot companions made accessible through a unique system that accommodates practical 21st-Century business and lifestyle needs.

David J. Kelley is the CTO of the tech venture capital firm Tracy Hall LLC, which focuses on companies that contribute to high-density sustainable community technologies, as well as the principal scientist with Artificial General Intelligence Inc. David also volunteers as the Chairman of the Transhuman National Committee board. David’s career has been built on technology trends and bleeding-edge research, primarily around the capitalization of product engineering, where new products can be brought to market and made profitable. David’s work on Artificial Intelligence in particular – the ICOM research project with AGI Inc. – is focused on emotion-based systems that are designed to work around human constraints and help remove the ‘human’ element from the design of AI systems, including military applications for advanced self-aware cognitive systems that do not need human interaction.

Hiroyuki Toyama is a Japanese doctoral student at the Department of Psychology in University of Jyväskylä, Finland. His doctoral study has focused on emotional intelligence (EI) in the context of personality and health psychology. In particular, he has attempted to shed light on the way in which trait EI is related to subjective well-being and physiological health. He has a great interest in the future development of artificial EI on the basis of contemporary theory of EI.

Mark Waser is Chief Technology Officer of the Digital Wisdom Institute and D161T4L W15D0M Inc., organizations devoted to the ethical implementation of advanced technologies for the benefit of all. He has been publishing data science research since 1983 and developing commercial AI software since 1984, including an expert system shell and builder for Citicorp, a neural network to evaluate thallium cardiac images for Air Force pilots and, recently, mobile front-ends for cloud-based AI and data science. He is particularly interested in safe ethical architectures and motivational systems for intelligent machines (including humans). As an AI ethicist, he has presented at numerous conferences and published articles in international journals. His current projects can be found at the Digital Wisdom website – http://wisdom.digital/

Demian Zivkovic is CEO+Structure of Ascendance Biomedical, president of the Institute of Exponential Sciences, as well as a scholar of several scientific disciplines. He has been interested in science, particularly neuropsychology, astronomy, and biology from a very young age. His greatest passions are cognitive augmentation and life extension, two endeavors he remains deeply committed to, to this day. He is also very interested in applications of augmented reality and hyperreality, which he believes have incredible potential for improving our lives.

He is a strong believer in interdisciplinarity as a paradigm for understanding the world. His studies span artificial intelligence, innovation science, and business, which he has pursued at the University of Utrecht. He also has a background in psychology, which he previously studied at the Saxion University of Applied Sciences. Demian co-founded Ascendance Biomedical, a Singapore-based company focused on cutting-edge biomedical services. Demian believes that raising capital and investing in technology and education is the best route to facilitating societal change. As a staunch proponent of LGBT rights and postgenderism, Demian believes advanced technologies can eventually provide a definitive solution for sex/gender-related issues in society.

Cryptocurrencies as a Single Pool of Wealth – Video by G. Stolyarov II

Mr. Stolyarov offers economic thoughts as to the purchasing power of decentralized electronic currencies, such as Bitcoin, Litecoin, and Dogecoin.

When considering the real purchasing power of the new cryptocurrencies, we should be looking not at Bitcoin in isolation, but at the combined pool of all cryptocurrencies in existence. In a world of many cryptocurrencies and the possibility of the creation of new cryptocurrencies, a single Bitcoin will purchase less than it could have purchased in a world where Bitcoin was the only possible cryptocurrency.

References

– “Cryptocurrencies as a Single Pool of Wealth: Thoughts on the Purchasing Power of Decentralized Electronic Money” – Essay by G. Stolyarov II

– Donations to Mr. Stolyarov via The Rational Argumentator:
Bitcoin – 1J2W6fK4oSgd6s1jYr2qv5WL8rtXpGRXfP
Dogecoin – DCgcDZnTAhoPPkTtNGNrWwwxZ9t5etZqUs

– “2013: Year Of The Bitcoin” – Kitco News – Forbes Magazine – December 10, 2013
– “Bitcoin” – Wikipedia
– “Litecoin” – Wikipedia
– “Namecoin” – Wikipedia
– “Peercoin” – Wikipedia
– “Dogecoin” – Wikipedia
– “Tulip mania” – Wikipedia
– “Moore’s Law” – Wikipedia
– The Theory of Money and Credit (1912) – Ludwig von Mises

Cryptocurrencies as a Single Pool of Wealth: Thoughts on the Purchasing Power of Decentralized Electronic Money – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
January 12, 2014
******************************

The recent meteoric rise in the dollar price of Bitcoin – from around $12 at the beginning of 2013 to several peaks above $1000 at the end – has brought widespread attention to the prospects for and future of cryptocurrencies. I have no material stake in Bitcoin (although I do accept donations), and this article will not attempt to predict whether the current price of Bitcoin signifies mostly lasting value or a bubble akin to the Dutch tulip mania of the 1630s. Instead of speculation about any particular price level, I hope here to establish a principle pertaining to the purchasing power of cryptocurrencies in general, since Bitcoin is no longer the only one.

Although Bitcoin, developed in 2009 by the pseudonymous Satoshi Nakamoto, has the distinction and advantage of having been the first cryptocurrency to gain widespread adoption, others, such as Litecoin (2011), Namecoin (2011), Peercoin (2012), and even Dogecoin (2013) – the first cryptocurrency based on an Internet meme – have followed suit. Many of these cryptocurrencies’ fundamental elements are similar. Litecoin’s algorithm is nearly identical to Bitcoin’s (with the major difference being a fourfold increase in the rate of block processing and transaction confirmation), and the Dogecoin algorithm is the same as that of Litecoin. The premise behind each cryptocurrency is built-in deflation: the rate of production slows with time, and only 21 million Bitcoins can ever be “mined” electronically. The limit for the total pool of Litecoins is 84 million, whereas the total Dogecoins in circulation will approach an asymptote of 100 billion.
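As a rough illustration of this built-in scarcity, the eventual supply cap of a halving-based cryptocurrency follows from a simple geometric series: the initial block reward, times the number of blocks between halvings, times two. The short Python sketch below uses the commonly published Bitcoin and Litecoin parameters (a 50-coin initial reward, with halvings every 210,000 and 840,000 blocks, respectively); treat it as illustrative arithmetic rather than a precise statement of either protocol’s implementation.

def asymptotic_supply(initial_reward, blocks_per_halving):
    # The reward halves every `blocks_per_halving` blocks, so total issuance is
    # initial_reward * blocks_per_halving * (1 + 1/2 + 1/4 + ...)
    # = 2 * initial_reward * blocks_per_halving.
    return 2 * initial_reward * blocks_per_halving

print(asymptotic_supply(50, 210_000))   # Bitcoin:  21,000,000
print(asymptotic_supply(50, 840_000))   # Litecoin: 84,000,000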


The deflationary mechanism of each cryptocurrency is admirable; it is an attempt to preserve real purchasing power. With fiat paper money printed by an out-of-control central bank, an increase in the number and denomination of papers (or their electronic equivalents) circulating in the economy will not increase material prosperity or the abundance of real goods; it will only raise the prices of goods in terms of fiat-money quantities. Ludwig von Mises, in his 1912 Theory of Money and Credit, outlined the redistributive effects  of inflation; those who get the new money first (typically politically connected cronies and the institutions they control) will gain in real purchasing power, while those to whom the new money spreads last will lose. Cryptocurrencies are independent of any central issuer (although different organizations administer the technical protocols of each cryptocurrency) and so are not vulnerable to such redistributive inflationary pressures induced by political considerations. This is the principal advantage of cryptocurrencies over any fiat currency issued by a governmental or quasi-governmental central bank. Moreover, the real expenditure of resources (computer hardware and electricity) for mining cryptocurrencies provides a built-in scarcity that further restricts the possibility of inflation.

Yet there is another element to consider. Virtually any major cryptocurrency can be exchanged freely for any other (with some inevitable but minor transaction costs and spreads) as well as for national fiat currencies (with higher transaction costs in both time and money). For instance, on January 12, 2014, one Bitcoin could trade for approximately $850, while one Litecoin could trade for approximately $25, implying an exchange rate of 34 Litecoins per Bitcoin. Due to the similarity in the technical specifications of each cryptocurrency (similar algorithms, similar built-in scarcity, ability to be mined by the same computer hardware, and similar decentralized, distributed generation), any cryptocurrency could theoretically serve an identical function to any other. (The one caveat to this principle is that any future cryptocurrency algorithm that offers increased security from theft could crowd out the others if enough market participants come to recognize it as offering more reliable protection against hackers and fraudsters than the current Bitcoin algorithm and Bitcoin-oriented services do.)  Moreover, any individual or organization with sufficient resources and determination could initiate a new cryptocurrency, much as Billy Markus initiated Dogecoin in part with the intent to provide an amusing reaction to the Bitcoin price crash in early December 2013.

This free entry into the cryptocurrency-creation market, combined with the essential similarity of all cryptocurrencies to date and the ability to readily exchange any one for any other, suggests that we should not be considering the purchasing power of Bitcoin in isolation. Rather, we should view all cryptocurrencies combined as a single pool of wealth. The total purchasing power of this pool of cryptocurrencies in general would depend on a multitude of real factors, including the demand among the general public for an alternative to governmental fiat currencies and the ease with which cryptocurrencies facilitate otherwise cumbersome or infeasible financial transactions. In other words, the properties of cryptocurrencies as stores of value and media of exchange would ultimately determine how much they could purchase, and the activities of arbitrageurs among the cryptocurrencies would tend to produce exchange rates that mirror the relative volumes of each cryptocurrency in existence. For instance, if we make the simplifying assumption that the functional properties of Bitcoin and Litecoin are identical for the practical purposes of users, then the exchange rate between Bitcoins and Litecoins should asymptotically approach 1 Bitcoin to 4 Litecoins, since this will be the ultimate ratio of the number of units of these cryptocurrencies. Of course, at any given time, the true ratio will vary, because each cryptocurrency was initiated at a different time, each has a different amount of computer hardware devoted to mining it, and none has come close to approaching its asymptotic volume.
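To make the arbitrage argument concrete, the brief Python sketch below contrasts the exchange rate implied by relative volumes (the long-run ratio posited under the simplifying assumption above) with the rate implied by the January 12, 2014, dollar prices quoted earlier. All figures are taken from this article; the calculation is purely illustrative.

# Asymptotic unit counts cited above
BTC_CAP = 21_000_000
LTC_CAP = 84_000_000

# Long-run exchange rate implied by relative volumes
supply_ratio = LTC_CAP / BTC_CAP   # 4 Litecoins per Bitcoin

# Exchange rate implied by the January 12, 2014, dollar prices
price_ratio = 850 / 25             # 34 Litecoins per Bitcoin

print(supply_ratio, price_ratio)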

 What implication does this insight have for the purchasing power of Bitcoin? In a world of many cryptocurrencies and the possibility of the creation of new cryptocurrencies, a single Bitcoin will purchase less than it could have purchased in a world where Bitcoin was the only possible cryptocurrency.  The degree of this effect depends on how many cryptocurrencies are in existence. This, in turn, depends on how many new cryptocurrency models or creative tweaks to existing cryptocurrency models are originated – since it is reasonable to posit that users will have little motive to switch from a more established cryptocurrency to a completely identical but less established cryptocurrency, all other things being equal. If new cryptocurrencies are originated with greater rapidity than the increase in the real purchasing power of cryptocurrencies in total, inflation may become a problem in the cryptocurrency world. The real bulwark against cryptocurrency inflation, then, is not the theoretical upper limit on any particular cryptocurrency’s volume, but rather the practical limitations on the amount of hardware that can be devoted to mining all cryptocurrencies combined. Will the scarcity of mining effort, in spite of future exponential advances in computer processing power in accordance with Moore’s Law, sufficiently restrain the inflationary pressures arising from human creativity in the cryptocurrency arena? Only time will tell.
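A toy model may help illustrate the dilution effect described above: if the combined pool of cryptocurrencies commands a fixed amount of real purchasing power, then every additional, functionally identical cryptocurrency spreads that purchasing power across more units. The dollar figure below is entirely hypothetical; the sketch demonstrates only the direction of the effect, not its magnitude.

def value_per_unit(pool_purchasing_power, unit_counts):
    # Purchasing power of the whole cryptocurrency pool, spread evenly across
    # all units of all (assumed functionally identical) cryptocurrencies.
    return pool_purchasing_power / sum(unit_counts)

POOL = 20e9  # hypothetical: the pool as a whole purchases $20 billion of goods

print(value_per_unit(POOL, [21e6]))          # Bitcoin as the only cryptocurrency
print(value_per_unit(POOL, [21e6, 84e6]))    # Bitcoin alongside Litecoin: each unit buys less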

Maintaining the Operational Continuity of Replicated Neurons – Article by Franco Cortese

The New Renaissance Hat
Franco Cortese
June 3, 2013
******************************
This essay is the tenth chapter in Franco Cortese’s forthcoming e-book, I Shall Not Go Quietly Into That Good Night!: My Quest to Cure Death, published by the Center for Transhumanity. The first nine chapters were previously published on The Rational Argumentator under the following titles:
***

Operational Continuity

One of the reasons for continuing conceptual development of the physical-functionalist NRU (neuron-replication-unit) approach, despite the perceived advantages of the informational-functionalist approach, was the possibility that computational emulation would either fail to replicate a given physical process (a functional-modality concern) or fail to maintain subjective-continuity (an operational-modality concern), most likely due to a difference between the physical operation of possible computational substrates and the physical operation of the brain (see Chapter 2). In regard to functionality, we might fail to computationally replicate (whether in simulation or emulation) a relevant physical process for reasons other than vitalism. We could fail to understand the underlying principles governing it, or we might understand its underlying principles well enough to predictively model it yet still fail to understand how it affects the other processes occurring in the neuron—for instance, if we used different modeling techniques or general model types for each component, we might be able to predictively model each individually while remaining unable to model how they affect each other, due to model untranslatability. Neither of these cases precludes the aspect in question from being completely material, and thus potentially explicable using the normative techniques we use to predictively model the universe. The physical-functionalist approach attempted to solve these potential problems through several NRU sub-classes, some of which kept certain biological features and functionally replaced certain others, while others kept, and functionally replaced, alternate sets of biological features. These can be considered varieties of biological-nonbiological NRU hybrids: they integrate into their own, predominantly non-biological operation those biological features, as they exist in the biological nervous system, which we failed to replicate functionally or operationally.

The subjective-continuity problem, however, is not concerned with whether something can be functionally replicated but with whether it can be functionally replicated while still retaining subjective-continuity throughout the procedure.

This category of possible basis for subjective-continuity has stark similarities to the possible problematic aspects (i.e., operational discontinuity) of current computational paradigms and substrates discussed in Chapter 2. In that case it was postulated that discontinuity occurred as a result of taking something normally operationally continuous and making it discontinuous: namely, (a) the fact that current computational paradigms are serial (whereas the brain has massive parallelism), which may cause components to only be instantiated one at a time, and (b) the fact that the resting membrane potential of biological neurons makes them procedurally continuous—that is, when in a resting or inoperative state they are still both on and undergoing minor fluctuations—whereas normative logic gates both do not produce a steady voltage when in an inoperative state (thus being procedurally discontinuous) and do not undergo minor fluctuations within such a steady-state voltage (or, more generally, a continuous signal) while in an inoperative state. I had a similar fear in regard to some mathematical and computational models as I understood them in 2009: what if we were taking what was a continuous process in its biological environment, and—by using multiple elements or procedural (e.g., computational, algorithmic) steps to replicate what would have been one element or procedural step in the original—effectively making it discontinuous by introducing additional intermediate steps? Or would we simply be introducing a number of continuous steps—that is, if each element or procedural step were operationally continuous in the same way that the components of a neuron are, would it then preserve operational continuity nonetheless?

This led to my attempting to develop a modeling approach aimed at retaining the same operational continuity that exists in biological neurons, which I will call the relationally isomorphic mathematical model. The biophysical processes comprising an existing neuron are what implement its computation; by using biophysical-mathematical models as our modeling approach, we might be introducing an element of discontinuity by mathematically modeling the physical processes giving rise to a computation/calculation, rather than modeling the computation/calculation directly. It would be the difference between modeling a given program and modeling the physical processes comprising the logic elements that give rise to the program. Thus, my novel approach during this period was to explore ways to model the computation directly.

Rather than using a host of mathematical operations to model the physical components that themselves give rise to a different type of mathematics, we instead use a modeling approach that maintains a 1-to-1 element or procedural-step correspondence with the level-of-scale that embodies the salient (i.e., aimed-for) computation. My attempts at developing this produced the following approach, though I lack the pure mathematical and computer-science background to judge its true accuracy or utility. The components, their properties, and the inputs used for a given model (at whatever scale) are substituted by numerical values, the magnitude of which preserves the relationships (e.g., ratio relationships) between components/properties and inputs, and by mathematical operations which preserve the relationships exhibited by their interaction. For instance: if the interaction between a given component/property and a given input produces an emergent inhibitory effect biologically, then one would combine them to get their difference or their factors, respectively, depending on whether they exemplify a linear or nonlinear relationship. If the component/property and the input combine to produce emergently excitatory effects biologically, one would combine them to get their sum or products, respectively, depending on whether they increased excitation in a linear or nonlinear manner.
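The combination rules in the preceding paragraph can be written down compactly. The Python sketch below is one speculative reading of them, in which “their factors” is interpreted as a quotient so that it parallels the product used for nonlinear excitation; the function name and the interpretation are editorial assumptions, not a formal specification from the text.

def combine(component, signal, effect, linear):
    # One reading of the rules above: excitatory pairs are summed (linear)
    # or multiplied (nonlinear); inhibitory pairs are differenced (linear)
    # or divided (nonlinear), reading "their factors" as a quotient.
    if effect == "excitatory":
        return component + signal if linear else component * signal
    if effect == "inhibitory":
        return component - signal if linear else component / signal
    raise ValueError("effect must be 'excitatory' or 'inhibitory'")

print(combine(1.5, 0.5, "excitatory", linear=True))    # 2.0
print(combine(1.5, 0.5, "inhibitory", linear=False))   # 3.0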

In an example from my notes, I tried to formulate how a chemical synapse could be modeled in this way. Neurotransmitters are given analog values such as positive or negative numbers, the sign of which (i.e., positive or negative) depends on whether it is excitatory or inhibitory and the magnitude of which depends on how much more excitatory/inhibitory it is than other neurotransmitters, all in reference to a baseline value (perhaps 0 if neutral or neither excitatory nor inhibitory; however, we may need to make this a negative value, considering that the neuron’s resting membrane-potential is electrically negative, and not electrochemically neutral). If they are neurotransmitter clusters, then one value would represent the neurotransmitter and another value its quantity, the sum or product of which represents the cluster. If the neurotransmitter clusters consist of multiple neurotransmitters, then two values (i.e., type and quantity) would be used for each, and the product of all values represents the cluster. Each summative-product value is given a second vector value separate from its state-value, representing its direction and speed in the 3D space of the synaptic junction. Thus by summing the products of all, the numerical value should contain the relational operations each value corresponds to, and the interactions and relationships represented by the first- and second-order products. The key lies in determining whether the relationship between two elements (e.g., two neurotransmitters) is linear (in which case they are summed), or nonlinear (in which case they are combined to produce a product), and whether it is a positive or negative relationship—in which case their factor, rather than their difference, or their product, rather than their sum, would be used. Combining the vector products would take into account how each cluster’s speed and position affects the end result, thus effectively emulating the process of diffusion across the synaptic junction. The model’s past states (which might need to be included in such a modeling methodology to account for synaptic plasticity—e.g., long-term potentiation and long-term modulation) would hypothetically be incorporated into the model via a temporal-vector value, wherein a third value (position along a temporal or “functional”/”operational” axis) is used when combining the values into a final summative product. This is similar to such modeling techniques as phase-space, which is a quantitative technique for modeling a given system’s “system-vector-states” or the functional/operational states it has the potential to possess.
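To illustrate the synapse example just described, here is a minimal Python sketch of the state-value portion of the model: each cluster is a signed neurotransmitter value multiplied by a quantity, plus a velocity vector for its motion across the junction, and the aggregate drive is the sum of the per-cluster products. The class and the numbers are illustrative assumptions; diffusion and the temporal (plasticity) vector are omitted.

from dataclasses import dataclass

@dataclass
class Cluster:
    nt_value: float                     # signed: positive = excitatory, negative = inhibitory
    quantity: float                     # amount of the neurotransmitter in the cluster
    velocity: tuple = (0.0, 0.0, 0.0)   # direction and speed in the 3D synaptic junction

    def state_value(self):
        # The cluster is represented by the product of its type-value and quantity.
        return self.nt_value * self.quantity

def synaptic_drive(clusters):
    # Summing the per-cluster products yields the aggregate excitatory/inhibitory drive;
    # combining the velocity vectors (diffusion) is left out of this sketch.
    return sum(c.state_value() for c in clusters)

print(synaptic_drive([Cluster(+1.5, 10), Cluster(-0.5, 4)]))   # 13.0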

How excitatory or inhibitory a given neurotransmitter is may depend upon other neurotransmitters already present in the synaptic junction; thus if the relationship between one neurotransmitter and another is not the same as that first neurotransmitter and an arbitrary third, then one cannot use static numerical values for them because the sequence in which they were released would affect how cumulatively excitatory or inhibitory a given synaptic transmission is.

A hypothetical case of this would be one type of neurotransmitter that can bond or react with two or more other types of neurotransmitter and is more likely to bond or react with one than with the other. If the chemically less attractive (or reactive) type were released first, it would bond anyway, owing to the absence of the comparatively more attractive type; if the more attractive type were then released afterward, it would fail to bond, because the original neurotransmitter would already have bonded with the less attractive one.

If a given neurotransmitter’s numerical value or weighting is determined by its relation to other neurotransmitters (i.e., if one is excitatory, and another is twice as excitatory, then if the first was 1.5, the second would be 3—assuming a linear relationship), and a given neurotransmitter does prove to have a different relationship to one neurotransmitter than it does another, then we cannot use a single value for it. Thus we might not be able to configure it such that the normative mathematical operations follow naturally from each other; instead, we may have to computationally model (via the [hypothetically] subjectively discontinuous method that incurs additional procedural steps) which mathematical operations to perform, and then perform them continuously without having to stop and compute what comes next, so as to preserve subjective-continuity.
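A small sketch of the difficulty just raised: if a neurotransmitter’s effective weight depends on which other neurotransmitter it interacts with, the weighting must be stored pairwise rather than as one static number. The interaction factors below are purely hypothetical.

# Hypothetical pairwise interaction factors, keyed by (neurotransmitter, partner)
pairwise_factor = {
    ("A", "B"): 2.0,   # relative to B, A is twice as excitatory
    ("A", "C"): 0.5,   # relative to C, A is only half as excitatory
}

def effective_weight(nt, partner, base=1.5):
    # The weight of `nt` depends on which partner is already present in the
    # junction, so no single static value for `nt` suffices.
    return base * pairwise_factor[(nt, partner)]

print(effective_weight("A", "B"))   # 3.0
print(effective_weight("A", "C"))   # 0.75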

We could also run the subjectively discontinuous model at a faster speed to account for its higher quantity of steps/operations and the need to keep up with the relationally isomorphic mathematical model, which possesses comparatively fewer procedural steps. Thus subjective-continuity could hypothetically be achieved (given the validity of the present postulated basis for subjective-continuity—operational continuity) via this method of intermittent external intervention, even if we need extra computational steps to replicate the single informational transformations and signal-combinations of the relationally isomorphic mathematical model.

Franco Cortese is an editor for Transhumanity.net, as well as one of its most frequent contributors.  He has also published articles and essays on Immortal Life and The Rational Argumentator. He contributed 4 essays and 7 debate responses to the digital anthology Human Destiny is to Eliminate Death: Essays, Rants and Arguments About Immortality.

Franco is an Advisor for Lifeboat Foundation (on its Futurists Board and its Life Extension Board) and contributes regularly to its blog.

Frontier-Making Private Initiatives: Examples from History – Article by G. Stolyarov II

The New Renaissance Hat
G. Stolyarov II
July 8, 2012
******************************

In my video “SpaceX, Neil deGrasse Tyson, & Private vs. Government Technological Breakthroughs”, I provided a brief discussion of notable counterexamples to Neil deGrasse Tyson’s assertions that private enterprise does not have the resources or exploratory orientation to open up radically new frontiers. Tyson argues that only well-funded government efforts can open up space or could have opened up the New World. History, however, offers many examples of precisely what Tyson denies to be possible: private enterprise breaking new ground and making the exploration of a frontier possible. Indeed, in many of these cases, governments only entered the arena later, once private inventors or entrepreneurs had already established an industry in which governments could get involved. Here, I offer a somewhat more thorough list of such groundbreaking and well-known private initiatives, as well as links to further information about each. I may also update this list as additional examples occur to me.

The Industrial Revolution: The Industrial Revolution – the explosion of technologies for mass production during the late 18th and early 19th centuries – itself arose out of private initiative. The extensive Wikipedia entry on the Industrial Revolution shows that virtually every one of the major inventions that made it possible was created by a private individual and put into commercial use by private entrepreneurs. This paradigm shift, more than any other, rescued the majority of humankind from the brink of subsistence and set the stage for the high living standards we enjoy today.

Automobile: The automobile owes its existence to ingenious tinkerers, inventors, and entrepreneurs. The first self-propelled vehicle was invented circa 1769 by Nicolas-Joseph Cugnot. Cugnot did work on experiments for the French military and did receive a pension from King Louis XV for his inventions. However, the subsequent developments that made the automobile possible occurred solely due to private initiative. The first internal combustion engines were independently developed circa 1807 by the private inventors Nicéphore Niépce and François Isaac de Rivaz. For the remainder of the 19th century, innovations in automobile technology were carried forward by a succession of tinkerers. The ubiquity and mass availability of the automobile owe their existence to the mass-production techniques pioneered by Henry Ford in the early 20th century.

Great Northern Railway: While some railroads, such as the notorious, repeatedly bankrupt Transcontinental Railroad in the United States, received government subsidies, many thriving railroads were fully funded and operated privately. James J. Hill’s Great Northern Railway – which played a pivotal role in the development of the Pacific Northwest – is an excellent example.

Electrification: The infusion of cheap, ubiquitous artificial light into human societies during the late 19th century owes its existence largely to the work of two private inventors and entrepreneurs: Nikola Tesla and Thomas Edison.

Computing: The first computers, too, were the products of private tinkering. A precursor, the Jacquard Loom, was developed by Joseph Marie Jacquard in 1801. The concept for the first fully functional computer was developed by Charles Babbage in 1837 – though Babbage did not have the funds to complete his prototype. The Wikipedia entry on the history of computing shows that private individuals contributed overwhelmingly to the theoretical and practical knowledge needed to construct the first fully functioning general-purpose computers in the mid-20th century. To be sure, some of the development took place in government-funded universities or was done for the benefit of the United States military. However, it is undeniable that we have private entrepreneurs and companies to thank for the introduction of computers and software to the general public beginning in the 1970s.

Civilian Internet: While the Internet began as a US military project (ARPANET) in the 1960s, it was not until it was opened to the private market that its effects on the world became truly groundbreaking. An excellent discussion of this development can be found in Peter Klein’s essay, “Government Did Invent the Internet, But the Market Made It Glorious”.

Human Genome Project: While the United States government’s Human Genome Project began earlier, in 1990, it was overtaken by the privately funded genome-sequencing project of J. Craig Venter and his company Celera. Celera started its work on sequencing the human genome in 1998 and completed it in 2001, at approximately a tenth of the cost of the federally funded project. The two projects published their results jointly, but the private project was far speedier and more cost-efficient.

Private Deep-Space Asteroid-Hunting Telescope: This initiative is in the works, but Leonard David of SPACE.com writes that Project Sentinel is expected to be launched in 2016 using SpaceX’s Falcon 9 rocket. This is an unprecedented private undertaking by the B612 Foundation, described by Mr. David as “a nonprofit group of scientists and explorers that has long advocated the exploration of asteroids and better space rock monitoring.” Project Sentinel aims to vastly improve our knowledge of potentially devastating near-Earth asteroids and to map 90% of them within 5.5 years of operation. The awareness conferred by this project might just save humanity itself.

With this illustrious history, private enterprise may yet bring us even greater achievements – from the colonization of Mars to indefinite human life extension. In my estimation, the probability of such an outcome far exceeds that of a national government undertaking such ambitious advancements of our civilization.