Don’t Fear Artificial Intelligence by Ray Kurzweil

This is an article from TIME by Ray Kurzweil called Don’t Fear Artificial Intelligence.  Basically, Kurzweil’s stance is that “technology is a double-edged sword” and that it always has been, but that’s no reason to abandon the research.  Kurzweil also states that “Virtually everyone’s mental capabilities will be enhanced by it within a decade.”  I hope it makes people smarter and not just more intelligent!


Don’t Fear Artificial Intelligence

Retro toy robot (Getty Images)


Two great thinkers see danger in AI. Here’s how to make it safe.

Stephen Hawking, the pre-eminent physicist, recently warned that artificial intelligence (AI), once it surpasses human intelligence, could pose a threat to the existence of human civilization. Elon Musk, the pioneer of digital money, private spaceflight and electric cars, has voiced similar concerns.

If AI becomes an existential threat, it won’t be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil-defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense. Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages.

The typical dystopian futurist movie has one or two individuals or groups fighting for control of “the AI.” Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands; it’s in 1 billion or 2 billion hands. A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually everyone’s mental capabilities will be enhanced by it within a decade.

We will still have conflicts among groups of people, each enhanced by AI. That is already the case. But we can take some comfort from a profound, exponential decrease in violence, as documented in Steven Pinker’s 2011 book, The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary somewhat from location to location, the rate of death in war is down hundredsfold compared with six centuries ago. Since that time, murders have declined tensfold. People are surprised by this. The impression that violence is on the rise results from another trend: exponentially better information about what is wrong with the world—another development aided by AI.

There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar Conference on Recombinant DNA was organized in 1975 to assess its potential dangers and devise a strategy to keep the field safe. The resulting guidelines, which have been revised by the industry since then, have worked very well: there have been no significant problems, accidental or intentional, for the past 39 years. We are now seeing major advances in medical treatments reaching clinical practice and thus far none of the anticipated problems.

Consideration of ethical guidelines for AI goes back to Isaac Asimov’s three laws of robotics, which appeared in his short story “Runaround” in 1942, eight years before Alan Turing introduced the field of AI in his 1950 paper “Computing Machinery and Intelligence.” The median view of AI practitioners today is that we are still several decades from achieving human-level AI. I am more optimistic and put the date at 2029, but either way, we do have time to devise ethical standards.

There are efforts at universities and companies to develop AI safety strategies and guidelines, some of which are already in place. Similar to the Asilomar guidelines, one idea is to clearly define the mission of each AI program and to build in encrypted safeguards to prevent unauthorized uses.

Ultimately, the most important approach we can take to keep AI safe is to work on our human governance and social institutions. We are already a human-machine civilization. The best way to avoid destructive conflict in the future is to continue the advance of our social ideals, which has already greatly reduced violence.

AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping to clean up the environment, providing high-quality education to people all over the world, helping the disabled (including providing Hawking’s voice) and contributing in a myriad of other ways. We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress. We have a moral imperative to realize this promise while controlling the peril. It won’t be the first time we’ve succeeded in doing this.

Kurzweil is the author of five books on artificial intelligence, including the recent New York Times best seller How to Create a Mind.

This article can also be found here.

Ben Goertzel – Beginnings [on Artificial Intelligence – Thanks to Adam A. Ford for this video.]

In this video, Ben Goertzel talks a little about how he got into AGI research and about the research itself.  I first heard of Ben Goertzel about four years ago, right when I was first studying computer science and considering a career in AI programming.  At the time, I was trying to imagine how you would build an emotionally intelligent machine.  I really enjoyed hearing some of his ideas at the time and still do.  Also at the time, I was listening to a lot of Tony Robbins, so, as you can imagine, I came up with some pretty interesting theories on artificial intelligence and empathetic machines.  Maybe if I get enough requests I’ll write a special post on some of those ideas.  You just let me know if you’re interested.


Runtime: 10:33


This video can also be found here and here.

Video Info:

Published on Jul 27, 2012

Ben Goertzel talks about his early stages in thinking about AI, and two books: The Hidden Pattern and Building Better Minds.

The interview was done in Melbourne, Australia, while Ben was down to speak at the Singularity Summit Australia 2011.

http://2011.singularitysummit.com.au

Interviewed, Filmed & Edited by Adam A. Ford
http://goertzel.org

Kevin Warwick Claims Turing Test Passed; Really? C’mon, Kevin…

This is an article from The Telegraph website called ‘Captain Cyborg’: the man behind the controversial Turing Test claims.  In the article, Kevin Warwick (Professor of Cybernetics at The University of Reading, England) claims a milestone has been reached in AI: the passing of the Turing test.  Personally, I’m disappointed in Prof. Warwick for making this claim, but read the article and decide for yourself…

‘Captain Cyborg’: the man behind the controversial Turing Test claims

Kevin Warwick, a professor of cybernetics at Reading University who implanted a microchip into his arm, is being scrutinised by scientists over his claims that a computer passed the “Turing Test”. Prof Warwick is considered a maverick among the science community. Photo: Rex Features

Kevin Warwick, a professor of cybernetics at Reading University, called his recent experiment in which a computer fooled humans in the Turing Test an “important landmark”, but scientific opposition is gathering.

Prof Warwick made headlines when the university claimed the 65-year-old Turing Test was passed for the first time by a “supercomputer” called Eugene Goostman at an event organised by Prof Warwick at the Royal Society in London.

Ten out of thirty human judges believed they were speaking to a real teenage boy during a five-minute period, so the experiment was hailed as a victory.
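The arithmetic behind that claim is worth making explicit. Below is a minimal sketch (mine, not the Telegraph’s); the 30% benchmark is the commonly cited reading of Turing’s 1950 prediction that, after five minutes of questioning, an average interrogator would have no more than a 70% chance of making the right identification:

```python
# Toy check of the reported pass criterion for the Reading event.
# The 30% benchmark is an interpretation of Turing's 1950 prediction,
# and whether clearing it counts as "passing the Turing Test" is
# exactly what critics such as Prof Shanahan dispute below.

judges_fooled = 10
total_judges = 30
benchmark = 0.30  # fraction of judges that must mistake the bot for a human

fooled_rate = judges_fooled / total_judges
print(f"Fooled {fooled_rate:.1%} of judges (benchmark: {benchmark:.0%})")
print("Claimed pass:", fooled_rate > benchmark)  # True: 33.3% > 30.0%
```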

However, other experts said the announcement trivialised “serious” AI (Artificial Intelligence) research, and fooled people into believing that the world of science fiction could soon become science fact.

Prof Warwick is considered a maverick among the science community. He first had a microchip implanted in his arm that triggered a greeting from computers each day when he arrived at work.

The scientist later implanted sensors and a microchip into the nerves in his arm, similar to an implant he also gave to his wife, so that when someone grasped her hand Prof Warwick was able to experience the same sensation in his.

He claimed it was a form of telepathy as it allowed his nerves to feel what she was feeling over the internet, but the work was controversial among other scientists as they doubted whether his experiments were much more than entertainment.

The latest announcement that the Turing Test has been passed for the very first time has been met with yet more scepticism.

Prof Warwick said: “In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test.

“This milestone will go down in history as one of the most exciting.”

However, Professor Murray Shanahan, a professor of cognitive robotics at Imperial College London, said there were “a lot of problems” with the claims.

The scientist said that as Eugene was described to judges as a 13-year-old boy from Ukraine who learned English as a second language, some of the bizarre responses to questions could be explained away.

He said the five-minute conversation benchmark was “taken out of context” from the Turing Test, and fell well short of a true experiment for Artificial Intelligence, which should last for “hours, if not days”.

He also said the 30-strong judging panel, which included Robert Llewellyn, the Red Dwarf actor, was not big enough to support the claim.

Prof Shanahan told the Telegraph: “I think there are a lot of problems with the claims and I do not believe the Turing Test has been passed.

“I think the claim is completely misplaced, and it devalues real AI research. It makes it seem like science fiction AI is nearly here, when in fact it’s not and it’s incredibly difficult.”

Prof Shanahan added that the “supercomputer” was in fact a chatbot, a computer programme, rather than a powerful machine.

Gary Marcus, a professor of cognitive science at New York University, said in an article for the New Yorker: “Here’s what Eugene Goostman isn’t: a supercomputer.

“It is not a ground-breaking, super-fast piece of innovative hardware but simply a cleverly-coded piece of software.”

Prof Warwick told the Telegraph: “I think they’re just pointing fingers. It’s a particular aspect of Artificial Intelligence research. It’s an iconic test, it’s controversial, as we can see.

“I don’t think it devalues other Artificial Intelligence. If anything, I would say if it excites a few children, then I think it’s a good thing.”

The original article can be found here.

Transhumanism and Money by Zeev Kirsch

Other than the Star Wars/Star Trek mixup at the beginning, this was a pretty good read about transhumanity and money by Zeev Kirsch at Transhumanity.net.  (Just messing with you, Zeev.  I know it was just a typo.)

Transhumanism and Money

Money is at the very center of how human beings communicate with one another in complex societies, and yet it is almost completely ignored in all private K-12 education in the United States and most nations. Money isn’t economics; Money is human behavior; it is social and individual psychology. Particularly now, as the world body of nations and central banks escalate currency wars (and real wars), more people are turning their attention to money. As a long time reader of futurism, science fiction, and for the last decade, transhumanist literature, I’ve wondered why these genres have all generally ignored money as a question. If future technological development inevitably depends on the productivity of complex societies comprised of many individuals operating at arm’s length, then why do transhumanists and futurists ignore money? This is true even in the popular culture of futurism. In Star Wars First Contact, Captain Picard travels back in time and must explain to a compatriot that ‘in the future’ there is no money because society does not need it any more. The future of money needs more than a pop culture non-explanation. Practical futurism, which seeks to actually create the future instead of hope and change for it, must embrace all horizons of where our present transitions toward our future. ALL transhumanist visions require complex human coordination to be achieved, and thus, going forward from here to utopia(s), they require (m)oneys. Therefore, deliberate ignorance of the money question threatens to retard transhumanist progress from actualization. Tim Collins, a notable mind in the transhumanist movement, presents his own views on practical futurism in what he calls the ‘grinder way’; I applaud him for his deep thinking and subsequent action on the subject when it comes to human device augmentation. I’m certain he would extend the philosophy of the grinder way to include a renewed transhumanist focus upon the money question.

Let us begin at the beginning, the very beginning of humanity: primates. Research on the social behaviors and psychology of primates has been escalating in the past decade. Chimpanzees, it turns out, don’t like to share, but they will share with fellow chimpanzees under certain circumstances. In one particular repeated observation in the wild, they will trade food for sex. This means the male must obtain the food, transport the food before it spoils and then tender the food to a female in expectation of a sexual encounter. Trading bananas for sex isn’t money, of course, but it is a primitive form of prostitution transaction, distinguished from the other, more prevalent chimpanzee sexual relationships, which lack the food-trading component. This clearly doesn’t tell us anything about Money in human society, but it does tell us that the human behavior underlying the creation of money far precedes the evolution of Homo sapiens. The chimps are bartering for sex, and bartering is one of the behaviors that underlies the early creation of money. Barter encapsulates the use value of money.

Millions of moons later, at some point in the evolution of Mankind between chimposapiens and Homo sapiens, primitive Mankind began squirreling away consumables that did not rot as quickly. From this we can assume Mankind began exchanging objects and services not only for their instant utility, but for their future utility, as a set of long term promises and expectations. This encapsulates the savings value of money. You can save something for its use value at a later date. Less than ten thousand years ago man began working advanced stone tools and metals, the sledge and roller turned into the wheel, and yes, one day (m)oneys arrived in many, many forms and evolved alongside the societies that were creating and advancing their use. It is a long, long story that we will never truly know, but we know that at various points things like shells, goats, shaped stones, even human beings, were used for trade, and that coining precious metals became the most popular money substrate a few thousand years ago. Since then the rise of paper notes has taken over as the predominant substrate of (m)oneys in the world. Money has evolved through various (m)oneys.

An essential jump in the modern era of money is that modern forms of (m)oney, whether metal or paper, have abandoned their ‘use’ value and transitioned to become valued exclusively as a medium of exchange (hence the saying that ‘you cannot eat gold’). While precious metals generally don’t oxidize or burn easily, paper is more vulnerable and easier to replace (creating widespread counterfeiting problems relative to counterfeiting of coined metal). Further along the line of money history, these specialized forms of money started to be lent out to other people in return for sets of promises, or sometimes for something called ‘interest’, which was an expectation of more Money in return than the amount lent out. Thus was born the ‘time value’ of money, which helped precipitate the growth of massive networks of promises and expectations that have themselves come to define the modern world. The lending value of money turned humanity away from the fight against a history of rotting warehoused forms of money, away from a history of heavy and difficult physical coinage for transport, slowly toward simpler and cheaper methods of structuring promises and expectations. This transition could not have happened without the growth of the predictable and stable institutions which have come to define complex societies.
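To make the ‘time value’ concrete, here is a minimal compound-interest sketch (my illustration, with invented numbers, nothing from the essay): a lender parts with money now in exchange for the promise of more money later.

```python
# Minimal illustration of the "time value" of money: lending principal
# now in exchange for a promise of a larger repayment later.

def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Compound interest: the promised repayment grows each year."""
    return principal * (1 + annual_rate) ** years

# Hypothetical terms: lend 100 units at 5% interest for 10 years.
promised = future_value(100.0, 0.05, 10)
print(f"Lend 100 today against a promise of {promised:.2f} in 10 years")  # ~162.89
```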

How do we understand Money now?

Now that we know the history of money, what is the present of money? I am not here to describe what is going on in current events, nor what the various intellectual giants of Money would explain to you about how various forms of credit comprise or don’t comprise different tiers of our modern Money system. There are literally thousands of articles a day you can read about that. However, after years of trying to understand what Money is based on studying its past, I would like to offer my own personal definition of money, which is best suited to our present understanding. I believe that ‘Money’ comprises the fungible parts of the dynamic network of promises and expectations held between individuals and groups in a society (see the toy sketch after the list below):

[Fungible] because there are many commitments, promises and expectations in society that are not fungible. Some of those promises and expectations are interpersonal and even ideological in nature. For example, some are commitments and expectations based on loving relationships, or strict hierarchical positions in secular and non-secular institutions, that cannot be exchanged in a more or less fungible manner. Interestingly, the definition of which relationships and objects are fungible changes with the values of societies and individuals themselves. Money is thus intimately connected with our personal and social value systems.

[Dynamic] because promises and expectations are not discrete platonic quantities to be metered out in units, but fuzzy neurological outputs based on our common understanding of the persons’ and groups’ behaviors and communications.

[Parts] because many things, in addition to legal tender, serve as a money in any given society. The aggregate of all these (m)oneys simultaneously represents all nodes on the infinite network of promises and expectations comprising Money in society. Anything people find highly liquid for the purposes of trading goods and services can and does function as a (m)oney in our society. For example, in the American Neogulag, cigarettes, candy and bagged instant coffee serve as a money for millions of people. Yet as we all know, the major component of Money in the U.S. is the legal tender currency titled the Federal Reserve Note, colloquially referred to as the Dollar.
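As promised above, here is one way to render the definition concretely in code. This is my toy model, not the author’s: a society is a network of promises, and Money is the fungible subset of that network.

```python
# Toy rendering of the definition above: Money as the fungible parts of
# a network of promises and expectations. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Promise:
    debtor: str
    creditor: str
    what: str
    fungible: bool  # can the claim be transferred or exchanged?

society = [
    Promise("Alice", "Bob", "20 Federal Reserve Notes", fungible=True),
    Promise("inmate A", "inmate B", "3 bags of instant coffee", fungible=True),
    Promise("Carol", "Dave", "lifelong loyalty", fungible=False),  # not Money
]

# Big-M Money: the fungible parts of the dynamic network of promises.
money = [p for p in society if p.fungible]
for p in money:
    print(f"{p.debtor} owes {p.creditor}: {p.what}")
```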

The future of (m)oney and Money.

What stops people from abandoning a (m)oney, and what stops people from abandoning Money altogether? When you consider the notion of ‘abandoning’ money, it generally represents currency collapse: the failure of a (m)oney to be used for the purposes of exchange, lending and savings. A (m)oney may fail slowly through time, or all at once. As observed in modern monetary history, the typical path of a modern (m)oney [money now almost always being centrally issued in the form of ‘Notes’] is that its failure begins in a predictable manner, slowly accelerating up a curve until a convexity point is reached where other (m)oneys, or no (m)oney at all, become more popular than utilizing the failing (m)oney. This tipping point is reached when a discrete change occurs in the willingness of various institutions and persons to lend money at interest to one another (time value of money), to possess the money over time (savings value of money) and to use that money for payments and sales of services, products and investments (exchange value of the money), all in that order. For example, if people started using currencies other than the Euro, the Euro would be abandoned in favor of other currencies and eventually be out of use, its value destroyed. On its way there, people would stop lending to each other in loans denominated in Euros, people would dump their savings of reserve Euros and, last but not least, people would finally stop exchanging Euros altogether. This is how any number of currencies around the world have failed multiple times over the past decades. Luckily, we in the West believe our system to be far away from any tipping points. But not everyone who is looking towards the future agrees with this outlook. I am not going to give prognostications about the future of the dollar. Needless to say, if you knew what would happen to the dollar, you wouldn’t be telling people about it.
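The ordered failure described above (lending stops first, then saving, then exchange) can be sketched as a simple threshold model. The numeric thresholds below are invented purely for illustration; this is a toy, not a forecast:

```python
# Toy model of the ordered abandonment of a failing (m)oney: as
# confidence in the currency falls, lending fails first, then savings,
# then exchange. The thresholds are invented for illustration.

def surviving_functions(confidence: float) -> list[str]:
    """Return which uses of a money survive at a given confidence level."""
    surviving = []
    if confidence > 0.8:
        surviving.append("lending at interest (time value)")
    if confidence > 0.5:
        surviving.append("holding as savings (savings value)")
    if confidence > 0.2:
        surviving.append("accepting in trade (exchange value)")
    return surviving

for c in (0.9, 0.6, 0.3, 0.1):
    print(f"confidence {c:.1f}: {surviving_functions(c) or 'abandoned'}")
```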

Big ‘M’ Money, however, is another story altogether. The question of Money goes beyond any one (m)oney, let alone the dollar. How is it that a transhumanist or other futurist could conceive of a Star Trek future where a collectivist society of enlightened humans stopped needing to use tokens to represent a network of trusts and promises? In such a society, how would individual desires be expressed in the collective framework? If I wanted to eat ALL the apples, what would stop me? When would the collective apple limit be reached for me, as opposed to for my best friend who is allergic to apples? Clearly any organized network of human beings must have a rule system. Rules implicate allowances, limits, credits, or whatever you would call them. The more collectivized a society is, the more the network of promises and expectations between individuals and groups needs to be mediated by the uber-collective, what we normally call government. Would a system of ‘credits, limits, or allowances’ registered as digital entries be anything other than a digital form of centrally planned money (which appears to be happening in Sweden)? What other system could there be?

I do not think society can operate without a fungible, dynamic set of promises and expectations we understand to be Money. Whether those promises and expectations can be traded more freely by individuals, or are more carefully ordered by the governing systems of that society, is another question. So, what other systems could be out there? Bitcoin is selling itself as a very powerful tool for avoiding government control over money by expediting digital exchange. Many transhumanists and futurists seem very quick to take up the Bitcoin mantle. Precious metals, while far more secure in their non-digital existence, are far more difficult to coin and trade with (especially as compared with the distances allowed by the digital internet). And yet, many anti-futurists believe that trading their not-so-precious dollars for precious metals is wiser than trading them for digital registries in a relatively new digital system that relies upon telecommunications networks for maintaining, if not expanding, its value. The question people are asking about the future of Money is what will happen to the most popular (m)oneys out there, such as the dollar, euro, yen and yuan. I’m not sure. But the increasing popularity of precious metals and alternatives like Bitcoin (not to mention all sorts of trading syndicates, some even using the internet) is a sure sign that people with excess savings are looking to get out of those currencies.

My question for the Transhumanist community is: what do you think about the future of Money and money? Over the years, I’ve perceived that Transhumanism is splitting into two camps which would provide separate perspectives on this question. One camp embraces futurism as necessarily collectivist at the highest level. The other camp embraces a future more focused on pockets of individualism relying upon deep commitment to technophilia and individualist interest in science; call this the Individualist camp of Transhumanism. They are the camp more attuned to the dangers of central planning and tyrannical collectivist decision paradigms (fascism, communism, whatever…). I am positing the classic juxtaposition of the Orwellian versus the Huxleyan fears for the collective. It seems to me Transhumanists trending towards the Individualist camp would emphasize the importance of developing robust Money systems, where the Collectivists would emphasize the overall strength of the entire network of promises and expectations. The former fear the Transhuman social aggregate will suffer excessively under capricious, powerful collectives (namely government and central banks), and the latter submit their faith that collective leadership will provide a network of promises and expectations in the overall best interests of the Transhuman social aggregate.

The Collectivist and Individualist camps are not entirely mutually exclusive, and like a yin/yang they seem to define each other in a relativistic sense. However, both camps seem to be taking note of Bitcoin’s recent success. I’ve learned a lot about Bitcoin, and while there have been many interesting developments with it as of late, I think the transhumanist community is overlooking the actions of a nation that many consider ‘the future’: China. China is buying gold coins, not bitcoins. So please, transhumanists of the Collectivist or Individualist persuasion, or both or neither, I am asking you to help me reconcile why one of the most forward-thinking, futurist, and seemingly Collectivist nations on earth has been busy hoarding gold for a number of years. I am not asking you to ignore Bitcoin or embrace gold; I am simply asking you for a little more help and a little more attention to the great money question.

[Disclosure: I am not endorsing or dismissing Bitcoin. I do not use Bitcoin, nor have I ever used it.]

Zeev Kirsch has also predicted, at the Long Now, the following scenario:

“By the end of Obama’s second term as President, The Central Bank of China will publicly announce that they have an amount of gold in reserve that is greater than Germany’s.”

The link to that “LongBet” is HERE.

Hero image from here: http://www.silvercoinstoday.com/ufwc-releases-prototype-eco-coin-in-ecosilver/103789/

This article can also be found at http://transhumanity.net/transhumanism-and-money/

National Intelligence Council Predicts a “Very Transhuman Future by 2030”

A U.S. government agency, the National Intelligence Council (NIC), has released “a 140-page document that outlines major trends and technological developments we should expect in the next 20 years.”  The entire 140-page document can be read or downloaded at http://www.dni.gov/files/documents/GlobalTrends_2030.pdf.

U.S. spy agency predicts a very transhuman future by 2030


The National Intelligence Council has just released its much anticipated forecasting report, a 140-page document that outlines major trends and technological developments we should expect in the next 20 years. Among their many predictions, the NIC foresees the end of U.S. global dominance, the rising power of individuals against states, a growing middle class that will increasingly challenge governments, and ongoing shortages in water, food and energy. But they also envision a future in which humans have been significantly modified by their technologies — which will herald the dawn of the transhuman era.

This work brings to mind the National Science Foundation’s groundbreaking 2003 report, Converging Technologies for Improving Human Performance — a relatively early attempt to understand and predict how advanced biotechnologies would impact the human experience. The NIC’s new report, Global Trends 2030: Alternative Worlds, follows in the same tradition — namely one that doesn’t ignore the potential for enhancement technologies.


In the new report, the NIC describes how implants, prosthetics, and powered exoskeletons will become regular fixtures of human life — which could result in substantial improvements to innate human capacities. By 2030, the authors predict, prosthetics should reach the point where they’re just as good as — or even better than — organic limbs. By this stage, the military will increasingly rely on exoskeletons to help soldiers carry heavy loads. Servicemen will also be administered psychostimulants to help them remain active for longer periods.

Many of these same technologies will also be used by the elderly, both as a way to maintain more youthful levels of strength and energy, and as a part of their life extension strategies.

Brain implants will also allow for advanced neural interface devices — which will bridge the gap between minds and machines. These technologies will allow for brain-controlled prosthetics, some of which may be able to provide “superhuman” abilities like enhanced strength, speed — and completely new functionality altogether.

Other mods will include retinal eye implants to enable night vision and other previously inaccessible light spectrums. Advanced neuropharmaceuticals will allow for vastly improved working memory, attention, and speed of thought.

“Augmented reality systems can provide enhanced experiences of real-world situations,” the report notes. “Combined with advances in robotics, avatars could provide feedback in the form of sensors providing touch and smell as well as aural and visual information to the operator.”

But as the report notes, many of these technologies will only be available to those who are able to afford them. The authors warn that it could result in a two-tiered society comprising enhanced and nonenhanced persons, a dynamic that would likely require government oversight and regulation.

Smartly, the report also cautions that these technologies will need to be secure. Developers will be increasingly challenged to prevent hackers from interfering with these devices.

Lastly, other technologies and scientific disciplines will have to keep pace to make much of this work. For example, longer-lasting batteries will improve the practicality of exoskeletons. Progress in the neurosciences will be critical for the development of future brain-machine interfaces. And advances in flexible biocompatible electronics will enable improved integration with cybernetic implants.

The entire report can be read here.

Image: Bruce Rolff/Shutterstock.

This article can also be found on io9 at http://io9.com/5967896/us-spy-agency-predicts-a-very-transhuman-future-by-2030

Transhumanism, medical technology and slippery slopes from the NCBI

This article (Transhumanism, medical technology and slippery slopes from the NCBI) explores transhumanism in the medical industry.  I thought it was a bit negatively biased, but the sources are good and disagreement doesn’t equate to invalidation in my book, so here it is…

Abstract

In this article, transhumanism is considered to be a quasi‐medical ideology that seeks to promote a variety of therapeutic and human‐enhancing aims. Moderate conceptions are distinguished from strong conceptions of transhumanism and the strong conceptions were found to be more problematic than the moderate ones. A particular critique of Boström’s defence of transhumanism is presented. Various forms of slippery slope arguments that may be used for and against transhumanism are discussed and one particular criticism, moral arbitrariness, that undermines both weak and strong transhumanism is highlighted.

No less a figure than Francis Fukuyama1 recently labelled transhumanism as “the world’s most dangerous idea”. Such an eye‐catching condemnation almost certainly denotes an issue worthy of serious consideration, especially given the centrality of biomedical technology to its aims. In this article, we consider transhumanism as an ideology that seeks to evangelise its human‐enhancing aims. Given that transhumanism covers a broad range of ideas, we distinguish moderate conceptions from strong ones and find the strong conceptions more problematic than the moderate ones. We also offer a critique of Boström’s2 position published in this journal. We discuss various forms of slippery slope arguments that may be used for and against transhumanism and highlight one particular criticism, moral arbitrariness, which undermines both forms of transhumanism.

What is transhumanism?

At the beginning of the 21st century, we find ourselves in strange times; facts and fantasy find their way together in ethics, medicine and philosophy journals and websites.2,3,4 Key sites of contestation include the very idea of human nature, the place of embodiment within medical ethics and, more specifically, the systematic reflections on the place of medical and other technologies in conceptions of the good life. A reflection of this situation is captured by Dyens5 who writes,

What we are witnessing today is the very convergence of environments, systems, bodies, and ontology toward and into the intelligent matter. We can no longer speak of the human condition or even of the posthuman condition. We must now refer to the intelligent condition.

We wish to evaluate the contents of such dialogue and to discuss, if not the death of human nature, then at least its dislocation and derogation in the thinkers who label themselves transhumanists.

One difficulty for critics of transhumanism is that a wide range of views fall under its label.6 Not merely are there idiosyncrasies of individual academics, but there does not seem to exist an absolutely agreed on definition of transhumanism. One can find not only substantial differences between key authors2,3,4,7,8 and the disparate disciplinary nuances of their exhortations, but also subtle variations in the offerings of its chief representatives. It is to be expected that any ideology transforms over time and not least of all in response to internal and external criticism. Yet, the transhumanism critic faces a further problem of identifying a robust target that stays still sufficiently long to locate it properly in these web‐driven days without constructing a “straw man” to knock over with the slightest philosophical breeze. For the purposes of targeting a sufficiently substantial target, we identify the writings of one of its clearest and intellectually robust proponents, the Oxford philosopher and cofounder of the World Transhumanist Association, Nick Boström,2 who has written recently in these pages of transhumanism’s desire to make good the “half‐baked” project3 that is human nature.

Before specifically evaluating Boström’s position, it is best first to offer a global definition for transhumanism and then to locate it among the range of views that fall under the heading. One of the most celebrated advocates of transhumanism is Max More, whose website reads “no more gods, no more faith, no more timid holding back. The future belongs to posthumanity”.8 We will have a clearer idea then of the kinds of position transhumanism stands in direct opposition to. Specifically, More8 asserts,

“Transhumanism” is a blanket term given to the school of thought that refuses to accept traditional human limitations such as death, disease and other biological frailties. Transhumans are typically interested in a variety of futurist topics, including space migration, mind uploading and cryonic suspension. Transhumans are also extremely interested in more immediate subjects such as bio‐ and nano‐technology, computers and neurology. Transhumans deplore the standard paradigms that attempt to render our world comfortable at the sake of human fulfilment.8

Strong transhumanism advocates see themselves engaged in a project, the purpose of which is to overcome the limits of human nature. Whether this is the foundational claim, or merely the central claim, is not clear. These limitations—one may describe them simply as features of human nature, as the idea of labelling them as limitations is itself to take up a negative stance towards them—concern appearance, human sensory capacities, intelligence, lifespan and vulnerability to harm. According to the extreme transhumanism programme, technology can be used to vastly enhance a person’s intelligence; to tailor their appearance to what they desire; to lengthen their lifespan, perhaps to immortality; and to reduce vastly their vulnerability to harm. This can be done by exploitation of various kinds of technology, including genetic engineering, cybernetics, computation and nanotechnology. Whether technology will continue to progress sufficiently, and sufficiently predictably, is of course quite another matter.

Advocates of transhumanism argue that recruitment or deployment of these various types of technology can produce people who are intelligent and immortal, but who are not members of the species Homo sapiens. Their species type will be ambiguous—for example, if they are cyborgs (part human, part machine)—or, if they are wholly machines, they will lack any common genetic features with human beings. A legion of labels covers this possibility; we find in Dyens’s5 recently translated book a variety of cultural bodies, perhaps the most extreme being cyberpunks:

…a profound misalignment between existence and its manifestation. This misalignment produces bodies so transformed, so dissociated, and so asynchronized, that their only outcome is gross mutation. Cyberpunk bodies are horrible, strange and mysterious (think of Alien, Robocop, Terminator, etc.), for they have no real attachment to any biological structure. (p 75)

Perhaps a reasonable claim is encapsulated in the idea that such entities will be posthuman. The extent to which posthuman might be synonymous with transhumanism is not clear. Extreme transhumanists strongly support such developments.

At the other end of transhumanism is a much less radical project, which is simply the project to use technology to enhance human characteristics—for example, beauty, lifespan and resistance to disease. In this less extreme project, there is no necessary aspiration to shed human nature or human genetic constitution, just to augment it with technology where possible and where desired by the person.

Who is for transhumanism?

At present it seems to be a movement based mostly in North America, although there are some adherents from the UK. Among its most intellectually sophisticated proponents is Nick Boström. Perhaps the most outspoken supporters of transhumanism are people who see it simply as an issue of free choice. It may simply be the case that moderate transhumanists are libertarians at the core. In that case, transhumanism merely supplies an overt technological dimension to libertarianism. If certain technological developments are possible, which they as competent choosers desire, then they should not be prevented from acquiring the technologically driven enhancements they desire. One obvious line of criticism here may be in relation to the inequality that necessarily arises with respect to scarce goods and services distributed by market mechanisms.9 We will elaborate this point in the Transhumanism and slippery slopes section.

So, one group of people for the transhumanism project sees it simply as a way of improving their own life by their own standards of what counts as an improvement. For example, they may choose to purchase an intervention, which will make them more intelligent or even extend their life by 200 years. (Of course it is not self‐evident that everyone would regard this as an improvement.) A less vociferous group sees the transhumanism project as not so much bound to the expansion of autonomy (notwithstanding our criticism that will necessarily be effected only in the sphere of economic consumer choice) as one that has the potential to improve the quality of life for humans in general. For this group, the relationship between transhumanism and the general good is what makes transhumanism worthy of support. For the other group, the worth of transhumanism is in its connection with their own conception of what is good for them, with the extension of their personal life choices.

What can be said in its favour?

Of the many points for transhumanism, we note three. Firstly, transhumanism seems to facilitate two aims that have commanded much support. The use of technology to improve humans is something we pretty much take for granted. Much good has been achieved with low‐level technology in the promotion of public health. The construction of sewage systems, clean water supplies, etc, is all work to facilitate this aim and is surely good work, work which aims at, and in this case achieves, a good. Moreover, a large portion of the modern biomedical enterprise is another example of a project that aims at generating this good too.

Secondly, proponents of transhumanism say it presents an opportunity to plan the future development of human beings, the species Homo sapiens. Instead of this being left to the evolutionary process and its exploitation of random mutations, transhumanism presents a hitherto unavailable option: tailoring the development of human beings to an ideal blueprint. Precisely whose ideal gets blueprinted is a point that we deal with later.

Thirdly, in the spirit of work in ethics that makes use of a technical idea of personhood, the view that moral status is independent of membership of a particular species (or indeed any biological species), transhumanism presents a way in which moral status can be shown to be bound to intellectual capacity rather than to human embodiment as such or human vulnerability in the capacity of embodiment (Harris, 1985).9a

What can be said against it?

Critics point to consequences of transhumanism, which they find unpalatable. One possible consequence feared by some commentators is that, in effect, transhumanism will lead to the existence of two distinct types of being, the human and the posthuman. The human may be incapable of breeding with the posthuman and will be seen as having a much lower moral standing. Given that, as Buchanan et al9 note, much moral progress, in the West at least, is founded on the category of the human in terms of rights claims, if we no longer have a common humanity, what rights, if any, ought to be enjoyed by transhumans? This can be viewed either as a criticism (we poor humans are no longer at the top of the evolutionary tree) or simply as a critical concern that invites further argumentation. We shall return to this idea in the final section, by way of identifying a deeper problem with the open‐endedness of transhumanism that builds on this recognition.

In the same vein, critics may argue that transhumanism will increase inequalities between the rich and the poor. The rich can afford to make use of transhumanism, but the poor will not be able to. Indeed, we may come to think of such people as deficient, failing to achieve a new heightened level of normal functioning.9 In the opposing direction, critical observers may say that transhumanism is, in reality, an irrelevance, as very few will be able to use the technological developments even if they ever manifest themselves. A further possibility is that transhumanism could lead to the extinction of humans and posthumans, for things are just as likely to turn out for the worse as for the better (eg, according to those who favour the precautionary principle).

One of the deeper philosophical objections comes from a very traditional source. Like all such utopian visions, transhumanism rests on some conception of good. So just as humanism is founded on the idea that humans are the measure of all things and that their fulfilment is to be found in the powers of reason extolled and extended in culture and education, so too transhumanism has a vision of the good, albeit one loosely shared. For one group of transhumanists, the good is the expansion of personal choice. Given that autonomy is so widely valued, why not remove the barriers to enhanced autonomy by various technological interventions? Theological critics especially, but not exclusively, object to what they see as the imperialising of autonomy. Elshtain10 lists the three c’s: choice, consent and control. These, she asserts, are the dominant motifs of modern American culture. And there is, of course, an army of communitarians (Bellah et al,10a MacIntyre,10b Sandel,10c Taylor10d and Walzer10e) ready to provide support in general moral and political matters to this line of criticism. One extension of this line of transhumanism thinking is to align the valorisation of autonomy with economic rationality, for we may as well be motivated by economic concerns as by moral ones where the market is concerned. As noted earlier, only a small minority may be able to access this technology (despite Boström’s naive disclaimer for democratic transhumanism), so the technology necessary for transhumanist transformations is unlikely to be prioritised in the context of artificially scarce public health resources. One other population attracted to transhumanism will be the elite sports world, fuelled by the media commercialisation complex—where mere mortals will get no more than a glimpse of the transhuman in competitive physical contexts. There may be something of a double‐binding character to this consumerism. The poor, at once removed from the possibility of such augmentation, pay (per view) for the pleasure of their envy.

If we argue against the idea that the good cannot be equated with what people choose simpliciter, it does not follow that we need to reject the requisite medical technology outright. Against the more moderate transhumanists, who see transhumanism as an opportunity to enhance the general quality of life for humans, it is nevertheless true that their position presupposes some conception of the good. What kind of traits is best engineered into humans: disease resistance or parabolic hearing? And unsurprisingly, transhumanists disagree about precisely what “objective goods” to select for installation into humans or posthumans.

Some radical critics of transhumanism see it as a threat to morality itself.1,11 This is because they see morality as necessarily connected to the kind of vulnerability that accompanies human nature. Think of the idea of human rights and the power this has had in voicing concern about the plight of especially vulnerable human beings. As noted earlier a transhumanist may be thought to be beyond humanity and as neither enjoying its rights nor its obligations. Why would a transhuman be moved by appeals to human solidarity? Once the prospect of posthumanism emerges, the whole of morality is thus threatened because the existence of human nature itself is under threat.

One further objection voiced by Habermas11 is that interfering with the process of human conception, and by implication human constitution, deprives humans of the “naturalness which so far has been a part of the taken‐for‐granted background of our self‐understanding as a species” and “Getting used to having human life biotechnologically at the disposal of our contingent preferences cannot help but change our normative self‐understanding” (p 72).

On this account, our self‐understanding would include, for example, our essential vulnerability to disease, ageing and death. Suppose the strong transhumanism project is realised. We are no longer thus vulnerable: immortality is a real prospect. Nevertheless, conceptual caution must be exercised here—even transhumanists will be susceptible in the manner that Hobbes12 noted. Even the strongest are vulnerable in their sleep. But the kind of vulnerability transhumanism seeks to overcome is of the internal kind (not Hobbes’s external threats). We are reminded of Woody Allen’s famous remark that he wanted to become immortal, not by doing great deeds but simply by not dying. This will result in a radical change in our self‐understanding, which has inescapably normative elements to it that need to be challenged. Most radically, this change in self‐understanding may take the form of a change in what we view as a good life. Hitherto, a human life would have been assumed to be finite. Transhumanists suggest that even now this may change with appropriate technology and the “right” motivation.

Do the changes in self‐understanding presented by transhumanists (and genetic manipulation) necessarily have to represent a change for the worse? As discussed earlier, it may be that the technology that generates the possibility of transhumanism can be used for the good of humans—for example, to promote immunity to disease or to increase quality of life. Is there really an intrinsic connection between acquisition of the capacity to bring about transhumanism and moral decline? Perhaps Habermas’s point is that moral decline is simply more likely to occur once radical enhancement technologies are adopted as a practice that is not intrinsically evil or morally objectionable. But how can this be known in advance? This raises the spectre of slippery slope arguments.

But before we discuss such slopes, let us note that the kind of approach (whether characterised as closed‐minded or sceptical) Boström seems to dislike is one he calls speculative. He dismisses as speculative the idea that offspring may think themselves lesser beings, commodifications of their parents’ egoistic desires (or some such). None the less, having pointed out the lack of epistemological standing of such speculation, he invites us to his own apparently more congenial position:

We might speculate, instead, that germ‐line enhancements will lead to more love and parental dedication. Some mothers and fathers might find it easier to love a child who, thanks to enhancements, is bright, beautiful, healthy, and happy. The practice of germ‐line enhancement might lead to better treatment of people with disabilities, because a general demystification of the genetic contributions to human traits could make it clearer that people with disabilities are not to blame for their disabilities and a decreased incidence of some disabilities could lead to more assistance being available for the remaining affected people to enable them to live full, unrestricted lives through various technological and social supports. Speculating about possible psychological or cultural effects of germ‐line engineering can therefore cut both ways. Good consequences no less than bad ones are possible. In the absence of sound arguments for the view that the negative consequences would predominate, such speculations provide no reason against moving forward with the technology. Ruminations over hypothetical side effects may serve to make us aware of things that could go wrong so that we can be on the lookout for untoward developments. By being aware of the perils in advance, we will be in a better position to take preventive countermeasures. (Boström, 2003, p 498)

Following Boström’s3 speculation then, what grounds for hope exist? Beyond speculation, what kinds of arguments does Boström offer? Well, most people may think that the burden of proof should fall to the transhumanists. Not so, according to Boström. Assuming the likely enormous benefits, he turns the tables on this intuition—not by argument but by skilful rhetorical speculation. We quote for accuracy of representation (emphasis added):

Only after a fair comparison of the risks with the likely positive consequences can any conclusion based on a cost‐benefit analysis be reached. In the case of germ‐line enhancements, the potential gains are enormous. Only rarely, however, are the potential gains discussed, perhaps because they are too obvious to be of much theoretical interest. By contrast, uncovering subtle and non‐trivial ways in which manipulating our genome could undermine deep values is philosophically a lot more challenging. But if we think about it, we recognize that the promise of genetic enhancements is anything but insignificant. Being free from severe genetic diseases would be good, as would having a mind that can learn more quickly, or having a more robust immune system. Healthier, wittier, happier people may be able to reach new levels culturally. To achieve a significant enhancement of human capacities would be to embark on the transhuman journey of exploration of some of the modes of being that are not accessible to us as we are currently constituted, possibly to discover and to instantiate important new values. On an even more basic level, genetic engineering holds great potential for alleviating unnecessary human suffering. Every day that the introduction of effective human genetic enhancement is delayed is a day of lost individual and cultural potential, and a day of torment for many unfortunate sufferers of diseases that could have been prevented. Seen in this light, proponents of a ban or a moratorium on human genetic modification must take on a heavy burden of proof in order to have the balance of reason tilt in their favor. (Bostrom,3 pp 498–9).

Now one way in which such a balance of reason may be had is in the idea of a slippery slope argument. We now turn to that.

Transhumanism and slippery slopes

A proper assessment of transhumanism requires consideration of the objection that acceptance of the main claims of transhumanism will place us on a slippery slope. Yet, paradoxically, both proponents and detractors of transhumanism may exploit slippery slope arguments in support of their position. It is necessary therefore to set out the various arguments that fall under this title so that we can better characterise arguments for and against transhumanism. We shall therefore examine three such attempts13,14,15 but argue that the arbitrary slippery slope15 may undermine all versions of transhumanism, although not every enhancement proposed by transhumanists.

Schauer13 offers the following essentialist analysis of slippery slope arguments. A “pure” slippery slope is one where a “particular act, seemingly innocuous when taken in isolation, may yet lead to a future host of similar but increasingly pernicious events”. Abortion and euthanasia are classic candidates for slippery slope arguments in public discussion and policy making. Against this, however, there is no reason to suppose that the future events (acts or policies) down the slope need to display similarities—indeed we may propose that they will lead to a whole range of different, although equally unwished for, consequences. The vast array of enhancements proposed by transhumanists would not be captured under this conception of a slippery slope because of their heterogeneity. Moreover, as Sternglantz16 notes, Schauer undermines his case when arguing that greater linguistic precision undermines the slippery slope and that indirect consequences often bolster slippery slope arguments. It is as if the slippery slopes would cease in a world with greater linguistic precision or when applied only to direct consequences. These views do not find support in the later literature. Schauer does, however, identify three non‐slippery slope arguments where the advocate’s aim is (a) to show that the bottom of a proposed slope has been arrived at; (b) to show that a principle is excessively broad; (c) to highlight how granting authority to X will make it more likely that an undesirable outcome will be achieved. Clearly (a) cannot properly be called a slippery slope argument in itself, while (b) and (c) often have some role in slippery slope arguments.

The excessive breadth principle can be subsumed under Bernard Williams’s distinction between slippery slope arguments with (a) horrible results and (b) arbitrary results. According to Williams, the nature of the bottom of the slope allows us to determine which category a particular argument falls under. Clearly, the most common form is the slippery slope to a horrible result argument. Walton14 goes further in distinguishing three types: (a) thin end of the wedge or precedent arguments; (b) Sorites arguments; and (c) domino‐effect arguments. Importantly, these arguments may be used both by antagonists and also by advocates of transhumanism. We shall consider the advocates of transhumanism first.

In the thin end of the wedge slippery slopes, allowing P will set a precedent that will allow further precedents (Pn) taken to an unspecified problematic terminus. Is it necessary that the end point has to be bad? Of course this is the typical linguistic meaning of the phrase “slippery slopes”. Nevertheless, we may turn the tables here and argue that [the] slopes may be viewed positively too.17 Perhaps a new phrase will be required to capture ineluctable slides (ascents?) to such end points. This would be somewhat analogous to the ideas of vicious and virtuous cycles. So transhumanists could argue that, once the artificial generation of life through technologies of in vitro fertilisation was thought permissible, the slope was foreseeable, and transhumanists are doing no more than extending that life‐creating and fashioning impulse.

In Sorites arguments, the inability to draw clear distinctions has the effect that allowing P will not allow us to consistently deny Pn. This slope follows the form of the Sorites paradox, where taking a grain of sand from a heap does not prevent our recognising or describing the heap as such, even though it is not identical with its former state. At the heart of the problem with such arguments is the idea of conceptual vagueness. Yet the logical distinctions used by philosophers are often inapplicable in the real world.15,18 Transhumanists may well seize on this vagueness and apply a Sorites argument as follows: as therapeutic interventions are currently morally permissible, and there is no clear distinction between treatment and enhancement, enhancement interventions are morally permissible too. They may ask whether we can really distinguish categorically between the added functionality of certain prosthetic devices and sonar senses.
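
The structure of a Sorites argument can be made concrete. The sketch below is our own illustration, not anything from the paper: it shows that any crisp cutoff imposed on a vague predicate such as “heap” is stipulated rather than discovered, and it is exactly this arbitrariness that the transhumanist exploits when treating the treatment–enhancement continuum like the heap.

```python
# A toy model of the Sorites structure (illustrative only). The cutoff
# below is arbitrary: nothing about the concept "heap" justifies 10,000
# grains rather than 9,999, which is the point of the paradox.

def is_heap(grains: int, cutoff: int = 10_000) -> bool:
    return grains >= cutoff

grains = 20_000
while grains > 0:
    if is_heap(grains) != is_heap(grains - 1):
        # The classification flips on the removal of a single grain,
        # which our everyday concept of "heap" seems to forbid.
        print(f"'Heap' becomes 'non-heap' between {grains:,} and {grains - 1:,} grains")
    grains -= 1
```

The authors’ rejoinder, developed below, is that “therapy” and “enhancement” are nowhere near as open-textured as “heap”, so the analogy cannot simply be assumed.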

In domino‐effect arguments, the domino conception of the slippery slope, we have what others often refer to as a causal slippery slope.19 Once P is allowed, a causal chain will be effected allowing Pn and so on to follow, which will precipitate increasingly bad consequences.

In what ways can slippery slope arguments be used against transhumanism? What is wrong with transhumanism? Or, better, is there a point at which we can say transhumanism is objectionable? One particular strategy adopted by its proponents falls clearly under the thin end of the wedge conception of the slippery slope. Although some aspects of their ideology seem aimed at unqualified goods, there seems to be no limit to the aspirations of transhumanism, as its advocates cite the powers of other animals and substances as potential modifications for the transhumanist. Although we can admire the sonic capacities of the bat, the elastic strength of lizards’ tongues and the endurability of Kevlar in contrast with traditional construction materials used in the body, their transplantation into humans is, to borrow Kass’s celebrated label, “repugnant” (Kass, 1997).19a

Although not all transhumanists would support such extreme enhancements (if that is indeed what they are), less radical advocates offer justifications that lead with therapeutic aims while the more Promethean ambitions are advertised less explicitly. We can find many examples of this manoeuvre. Take, for example, the Cognitive Enhancement Research Institute in California. Prominently displayed on its website front page (http://www.ceri.com/) we read, “Do you know somebody with Alzheimer’s disease? Click to see the latest research breakthrough.” The strategy is simple: treatment by the front entrance, enhancement by the back door. Borgmann,20 in his discussion of the uses of technology in modern society, observed precisely this argumentative strategy more than 20 years ago:

The main goal of these programs seems to be the domination of nature. But we must be more precise. The desire to dominate does not just spring from a lust of power, from sheer human imperialism. It is from the start connected with the aim of liberating humanity from disease, hunger, and toil and enriching life with learning, art and athletics.

Who would want to deny the powers of viral diseases that can be genetically treated? Would we want to draw the line at the transplantation of non-human capacities (sonar path finding)? Or at an in vivo fibre-optic communications backbone, or anti-degeneration powers? (These would have to be non-human, by hypothesis.) Or should we consider the scope of technological enhancements that one chief transhumanist, Natasha Vita More21, propounds:

A transhuman is an evolutionary stage from being exclusively biological to becoming post‐biological. Post‐biological means a continuous shedding of our biology and merging with machines. (…) The body, as we transform ourselves over time, will take on different types of appearances and designs and materials. (…)

For hiking a mountain, I’d like extended leg strength, stamina, a skin‐sheath to protect me from damaging environmental aspects, self‐moisturizing, cool‐down capability, extended hearing and augmented vision (Network of sonar sensors depicts data through solid mass and map images onto visual field. Overlay window shifts spectrum frequencies. Visual scratch pad relays mental ideas to visual recognition bots. Global Satellite interface at micro‐zoom range).

For a party, I’d like an eclectic look ‐ a glistening bronze skin with emerald green highlights, enhanced height to tower above other people, a sophisticated internal sound system so that I could alter the music to suit my own taste, memory enhance device, emotional‐select for feel‐good people so I wouldn’t get dragged into anyone’s inappropriate conversations. And parabolic hearing so that I could listen in on conversations across the room if the one I was currently in started winding down.

Notwithstanding the difficulty of bringing transhumanism together under one movement, the sheer variety of proposals contained in Vita More’s catalogue alone means that we cannot determinately point to a precise station and say, “Here, this is the end we said things would naturally progress to.” But does this pose a problem? It certainly makes it difficult to specify exactly the “horrible result” that is supposed to lie at the bottom of the slope. Equally, it is extremely difficult to say that allowing precedent X will lead to practices Y or Z, since it is not clear how those practices are (if at all) connected with the precedent. So it is not clear that a precedent-setting slippery slope can strictly be used in every case against transhumanism, although it may be applicable in some.

Nevertheless, we contend, in contrast with Boström, that the burden of proof falls to the transhumanist. Consider, in this light, a Sorites-type slope. The transhumanist would have to show that the relationship between therapeutic practices and enhancements is indeed transitive. We know night from day without being able to specify exactly when one becomes the other. So simply because we cannot draw a precise distinction between, say, genetic treatments G1, G2 and G3 and transhumanist enhancements T1, T2 and so on, it does not follow that there are no important moral distinctions between G1 and T20. According to Williams,15 this kind of indeterminacy arises from the conceptual vagueness of certain terms. Yet the indeterminacy of so open a predicate as “heap” does not hold equally for “therapy” or “enhancement”; the latitude they permit is nowhere near so wide.

Instead of objecting to Pn on the grounds that Pn is morally objectionable (ie, by depicting a horrible result), we may instead, after Williams, object that the slide from P to Pn is simply morally arbitrary, when it ought not to be. Here we may say, without specifying any horrible result, that it becomes difficult to know what, in principle, could ever be objected to. And this is, quite literally, what is troublesome. It seems to us that this criticism applies to all categories of transhumanism, although not necessarily to all enhancements proposed by them. Clearly, the somewhat loose identity of the movement—and the variations between strong and moderate versions—makes it difficult to sustain this argument unequivocally.

Still, the transhumanist may be justified in asking, “What is wrong with arbitrariness?” Consider one brief example. In many aspects of our lives we share the intuition that, in the absence of good reasons, we ought not to discriminate among people arbitrarily. Healthcare may be considered precisely such a case. Given the ever-increasing demand for public healthcare services and products, it may be argued that access to them ought typically to be governed by publicly disputable criteria, such as clinical need or potential benefit, rather than by individual choices of an arbitrary or subjective nature. And nothing in transhumanism seems to allow for such objective dispute, let alone prioritisation. Of course, transhumanists such as More feel no such disquiet; his phrase “No more timidity” is a typical transhumanist slogan. We applaud advances in therapeutic medical technologies, from new genetically based organ regeneration to more familiar prosthetic devices. Here the ends of the interventions are clearly medically defined and the means closely regulated, and it is precisely this clarity of ends and means that blocks the transhumanist’s Sorites-type slippery slope. But in the absence of a telos, of clearly and substantively specified ends (beyond the mere banner of enhancement), we suggest that the public, medical professionals and bioethicists alike ought to resist the potentially open-ended transformations of human nature. For if all transformations are in principle enhancements, then surely none are; the very application of the word becomes redundant. Thus one strong argument against transhumanism generally—the arbitrary slippery slope—presents it with a challenge: to show that all of what are described as transhumanist enhancements are imbued with positive normative force, and are not merely technological extensions of a libertarianism whose conception of the good is nothing more than an extension of individual choice and consumption.

Limits of transhumanist arguments for medical technology and practice

Already we have seen a host of therapeutically designed drugs misused for enhancement by non-therapeutic populations. Consider the non-therapeutic use of human growth hormone. Such is the present perception of height as a positional good in society that Cuttler et al22 report that the proportion of doctors who recommended growth hormone treatment for short, non-growth-hormone-deficient children ranged from 1% to 74%. This is despite its contraindication in the professional literature, such as that of the Pediatric Endocrine Society, and considerable doubt about its efficacy.23,24 Moreover, evidence supports the view that recreational body builders will use the technology, given their documented use and misuse of steroids and other biotechnological products.25,26 Finally, in the sphere of elite sport, which so valorises embodied capacities that may be found in greater degree, precision and sophistication elsewhere in the animal kingdom or in the computer laboratory, biomedical enhancers may latch onto genetically determined capacities and adopt or adapt them for their own commercially driven ends.

The arguments and examples presented here do no more than warn us of enhancement ideologies, such as transhumanism, which seek to ground their futuristic agendas in the bedrock of medical technological progress aimed at therapeutic ends, secondarily extending it to loosely defined enhancement ends. In discussion and in the bioethics literature, the future of genetic engineering is often challenged by slippery slope arguments that lead policy and practice to a horrible result. Instead of pointing to the undesirability of the ends to which transhumanism leads, we have pointed out its failure to specify a telos beyond the slogans of “overcoming timidity” or Boström’s3 exhortation that the passive acceptance of ageing is an example of “reckless and dangerous barriers to urgently needed action in the biomedical sphere”.

We propose that greater care be taken to distinguish the slippery slope arguments at work in the emotionally loaded exhortations of transhumanism, so as to arrive at a more judicious perspective on the technologically driven agenda for biomedical enhancement. Perhaps we would do better to consider those other all-too-human frailties, such as violent aggression and wanton self-harming, before we turn too readily to the richer imaginations of biomedical technologists.

Footnotes

Competing interests: None.

References

1. Fukuyama F. Transhumanism. Foreign Policy 2004;124:42–44.
2. Boström N. The fable of the dragon tyrant. J Med Ethics 2005;31:231–237.
3. Boström N. Human genetic enhancements: a transhumanist perspective. J Value Inquiry 2004;37:493–506.
4. Boström N. Transhumanist values. http://www.nickbostrom.com/ethics/values.html (accessed 19 May 2005).
5. Dyens O. Metal and flesh: the evolution of man: technology takes over (trans Bibbee EJ). London: MIT Press, 2001.
6. World Transhumanist Association. http://www.transhumanism.org/index.php/WTA/index/ (accessed 7 Apr 2006).
7. More M. Transhumanism: towards a futurist philosophy. 1996. http://www.maxmore.com/transhum.htm (accessed 20 Jul 2005).
8. More M. http://www.mactonnies.com/trans.html 2005 (accessed 13 Jul 2005).
9. Buchanan A, Brock DW, Daniels N, et al. From chance to choice: genetics and justice. Cambridge: Cambridge University Press, 2000.
9a. Harris J. The value of life. London: Routledge, 1985.
10. Elshtain B, ed. The body and the quest for control. In: Is human nature obsolete? Cambridge, MA: MIT Press, 2004:155–174.
10a. Bellah RN, et al. Habits of the heart: individualism and commitment in American life. Berkeley: University of California Press, 1996.
10b. MacIntyre AC. After virtue. 2nd edn. London: Duckworth, 1985.
10c. Sandel M. Liberalism and the limits of justice. Cambridge: Cambridge University Press, 1982.
10d. Taylor C. The ethics of authenticity. Boston: Harvard University Press, 1982.
10e. Walzer M. Spheres of justice. New York: Basic Books, 1983.
11. Habermas J. The future of human nature. Cambridge: Polity, 2003.
12. Hobbes T. Leviathan (Oakeshott M, ed). London: Macmillan, 1962.
13. Schauer F. Slippery slopes. Harvard Law Rev 1985;99:361–383.
14. Walton DN. Slippery slope arguments. Oxford: Clarendon, 1992.
15. Williams BAO. Which slopes are slippery? In: Lockwood M, ed. Making sense of humanity. Cambridge: Cambridge University Press, 1995:213–223.
16. Sternglantz R. Raining on the parade of horribles: of slippery slopes, faux slopes, and Justice Scalia’s dissent in Lawrence v Texas. Univ Pa Law Rev 2005;153:1097–1120.
17. Schubert L. Ethical implications of pharmacogenetics—do slippery slope arguments matter? Bioethics 2004;18:361–378.
18. Lamb D. Down the slippery slope. London: Croom Helm, 1988.
19. Den Hartogh G. The slippery slope argument. In: Kuhse H, Singer P, eds. Companion to bioethics. Oxford: Blackwell, 2005:280–290.
19a. Kass L. The wisdom of repugnance. New Republic 1997 Jun 2:17–26.
20. Borgmann A. Technology and the character of everyday life. Chicago: University of Chicago Press, 1984.
21. Vita More N. Who are transhumans? 2000. http://www.transhumanist.biz/interviews.htm (accessed 7 Apr 2006).
22. Cuttler L, Silvers JB, Singh J, et al. Short stature and growth hormone therapy: a national study of physician recommendation patterns. JAMA 1996;276:531–537.
23. Vance ML, Mauras N. Growth hormone therapy in adults and children. N Engl J Med 1999;341:1206–1216.
24. Anon. Guidelines for the use of growth hormone in children with short stature: a report by the Drug and Therapeutics Committee of the Lawson Wilkins Pediatric Endocrine Society. J Pediatr 1995;127:857–867.
25. Grace F, Baker JS, Davies B. Anabolic androgenic steroid (AAS) use in recreational gym users. J Subst Use 2001;6:189–195.
26. Grace F, Baker JS, Davies B. Blood pressure and rate pressure product response in males using high-dose anabolic androgenic steroids (AAS). J Sci Med Sport 2003;6:307–312.


This article can also be found on the National Center for Biotechnology Information (NCBI) website at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2563415/

Humans 2.0 with Jason Silva

This is one of the Shots of Awe videos created by Jason Silva.  It’s called HUMAN 2.0.  I don’t think a description is in order here since all the Shots of Awe videos are short and sweet.

Runtime: 2:15

Video Info:

Published on Dec 2, 2014

“Your trivial-seeming self tracking app is part of something much bigger. It’s part of a new stage in the scientific method, another tool in the kit. It’s not the only thing going on, but it’s part of that evolutionary process.” – Ethan Zuckerman paraphrasing Kevin Kelly

Steven Johnson
“Chance favors the connected mind.”
http://www.ted.com/talks/steven_johns…

Additional footage courtesy of Monstro Design and http://nats.aero

For more information on Norton security, please go here: http://us.norton.com/boldlygo

Join Jason Silva every week as he freestyles his way into the complex systems of society, technology and human existence and discusses the truth and beauty of science in a form of existential jazz. New episodes every Tuesday.

Watch More Shots of Awe on TestTube http://testtube.com/shotsofawe


This video can also be found at https://www.youtube.com/watch?v=fXB5-iwNah0

How Much Longer Before Our First AI Catastrophe by George Dvorsky

This is an article called How Much Longer Before Our First AI Catastrophe?  Pretty pessimistic-sounding title, right?  It’s actually not a bad article.  The primary focus is not on strong AI (artificial intelligence), as you might be assuming, but on weak AI.  My philosophy is the same as always on this one: be aware and be smart; fear will only cause problems.

How Much Longer Before Our First AI Catastrophe?


What will happen in the days after the birth of the first true artificial intelligence? If things continue apace, this could prove to be the most dangerous time in human history. It will be an era of weak and narrow artificial intelligence, a highly dangerous combination that could wreak tremendous havoc on human civilization. Here’s why we’ll need to be ready.

First, let’s define some terms. The Technological Singularity, which you’ve probably heard of before, is the advent of recursively improving greater-than-human artificial general intelligence (or artificial superintelligence), or the development of strong AI (human-like artificial general intelligence).

But this particular concern has to do with the rise of weak AI — expert systems that match or exceed human intelligence in a narrowly defined area, but not in broader areas. As a consequence, many of these systems will work outside of human comprehension and control.

But don’t let the name fool you; there’s nothing weak about the kind of damage it could do.

Before the Singularity

The Singularity is often misunderstood as AI that’s simply smarter than humans, or the rise of human-like consciousness in a machine. Neither is the case. To a non-trivial degree, much of our AI already exceeds human capacities. It’s just not sophisticated and robust enough to do any significant damage to our infrastructure. The trouble will start to come when, in the case of the Singularity, a highly generalized AI starts to iteratively improve upon itself.


And indeed, when the Singularity hits, it’ll be, in the words of mathematician I. J. Good, an “intelligence explosion” — and it will indeed hit us like a bomb. Human control will forever be relegated to the sidelines, in whatever form that might take.

A pre-Singularity AI disaster or catastrophe, on the other hand, will be containable. But just barely. It’ll likely arise from an expert system or super-sophisticated algorithm run amok. And the worry is not so much its power — which is definitely a significant part of the equation — but the speed at which it will inflict the damage. By the time we have a grasp on what’s going on, something terrible may have happened.

Narrow AI could knock out our electric grid, damage nuclear power plants, cause a global-scale economic collapse, misdirect autonomous vehicles and robots, take control of a factory or military installation, or unleash some kind of propagating blight that will be difficult to get rid of (whether in the digital realm or the real world). The possibilities are frighteningly endless.

Our infrastructure is becoming increasingly digital and interconnected — and by consequence, increasingly vulnerable. In a few decades, it will be as brittle as glass, with the bulk of human activity dependent upon it.

And it is indeed a possibility. The signs are all there.

Accidents Will Happen

Back in 1988, a Cornell University student named Robert Morris scripted a software program that could measure the size of the Internet. To make it work, he equipped it with a few clever tricks to help it along its way, including an ability to exploit known vulnerabilities in popular utility programs running on UNIX. This allowed the program to break into those machines and copy itself, thus infecting those systems.


On November 2, 1988, Morris introduced his program to the world. It quickly spread to thousands of computers, disrupting normal activities and Internet connectivity for days. Estimates put the cost of the damage anywhere between $10,000 and $100,000. Dubbed the “Morris Worm,” it’s considered the first computer worm in history — one that prompted DARPA to fund the establishment of the CERT/CC at Carnegie Mellon University to anticipate and respond to this new kind of threat.

As for Morris, he was charged under the Computer Fraud and Abuse Act and given a $10,000 fine.

But the takeaway from the incident was clear: despite our best intentions, accidents will happen. And as we continue to develop and push our technologies forward, there’s always the chance that they will operate outside our expectations — and even our control.

Down to the Millisecond

Indeed, unintended consequences are one thing; containability is quite another. Our technologies increasingly operate at levels beyond our real-time capacities. The best example of this comes from the world of high-frequency stock trading (HFT).


In HFT, securities are traded on a rapid-fire basis through the use of powerful computers and algorithms. A single investment position can last for a few minutes — or a few milliseconds; there can be as many as 500 transactions made in a single second. This type of computer trading can result in thousands upon thousands of transactions a day, each and every one of them decided by super-sophisticated scripts. The human traders involved (such as they are) just sit back and watch, incredulous at the machinations happening at breakneck speed.
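
To put those figures in perspective, a little back-of-envelope arithmetic helps (our own, using the article’s quoted peak rate; the session length and reaction time are illustrative assumptions):

```python
# Rough arithmetic on the speed mismatch between algorithms and humans.
# 500 trades/second is the peak rate quoted above; the 6.5-hour session
# and 250 ms human reaction time are our illustrative assumptions.

PEAK_TRADES_PER_SECOND = 500
SESSION_SECONDS = 6.5 * 60 * 60      # one 09:30-16:00 US equity session
HUMAN_REACTION_SECONDS = 0.25        # approximate human reaction time

trades_per_session = PEAK_TRADES_PER_SECOND * SESSION_SECONDS
trades_per_reaction = PEAK_TRADES_PER_SECOND * HUMAN_REACTION_SECONDS

print(f"Trades possible in one session at peak rate: {trades_per_session:,.0f}")
print(f"Trades possible within one human reaction:   {trades_per_reaction:.0f}")
# ~11.7 million trades per session; 125 trades before a human can even react.
```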

“Back in the day, I used to be able to explain to a client how their trade was executed. Technology has made the trade process so convoluted and complex that I can’t do that any more,” noted PNC Wealth Management’s Jim Dunigan in a Markets Media article.

Clearly, the ability to assess market conditions and react quickly is a valuable asset to have. And indeed, according to a 2009 study, HFT firms accounted for 60 to 73% of all U.S. equity trading volume; as of last year that share had dropped to about 50%, but HFT is still considered a highly profitable form of trading.

To date, the most significant single incident involving HFT came at 2:45 p.m. on May 6, 2010. For a period of about five minutes, the Dow Jones Industrial Average plummeted over 1,000 points (approximately 9%); for a few minutes, $1 trillion in market value vanished. About 600 points were recovered 20 minutes later. Now called the 2010 Flash Crash, it was the second largest point swing in history and the biggest one-day point decline.

The incident prompted an investigation led by Gregg E. Berman of the U.S. Securities and Exchange Commission (SEC), working with the Commodity Futures Trading Commission (CFTC). The investigators posited a number of theories (of which there are many, some of them quite complex), but their primary concern was the impact of HFT. They determined that the collective efforts of the algorithms exacerbated price declines; by selling aggressively, the trader-bots worked to eliminate their positions and withdraw from the market in the face of uncertainty.

The following year, an independent study concluded that technology played an important role, but that it wasn’t the entire story. Looking at the Flash Crash in detail, the authors argued that it was “the result of the new dynamics at play in the current market structure,” and the role played by “order toxicity.” At the same time, however, they noted that HFT traders exhibited trading patterns inconsistent with the traditional definition of market making, and that they were “aggressively [trading] in the direction of price changes.”

HFT is also playing an increasing role in currencies and commodities, making up about 28% of the total volume in futures markets. Not surprisingly, this area has become vulnerable to mini crashes. Following incidents involving the trading of cocoa and sugar, the Wall Street Journal highlighted the growing concerns:

“The electronic platform is too fast; it doesn’t slow things down” like humans would, said Nick Gentile, a former cocoa floor trader. “It’s very frustrating” to go through these flash crashes, he said…

…The same is happening in the sugar market, provoking outrage within the industry. In a February letter to ICE, the World Sugar Committee, which represents large sugar users and producers, called algorithmic and high-speed traders “parasitic.”

Just how culpable HFT is in the phenomenon of flash crashes is an open question, but it’s clear that the trading environment is changing rapidly. Market analysts now speak in terms of “microstructures,” trading “circuit breakers,” and the “VPIN Flow Toxicity metric.” It’s also difficult to predict how serious future flash crashes could become. If sufficient measures aren’t put into place to halt these events when they happen, and assuming HFT is scaled up in terms of market breadth, scope, and speed, it’s not unreasonable to imagine events in which massive and irrecoverable losses occur. And indeed, some analysts are already predicting systems that can support 100,000 transactions per second.

More to the point, HFT and flash crashes may never create an economic disaster — but they are a potent example of how our other mission-critical systems may reach unprecedented tempos. As we defer critical decision-making to our technological artifacts, and as they increase in power and speed, we increasingly find ourselves outside the locus of control and comprehension.

When AI Screws Up, It Screws Up Badly

No doubt, we are already at the stage when computers exceed our ability to understand how and why they do the things they do. One of the best examples of this is IBM’s Watson, the expert computer system that trounced the world’s best Jeopardy players in 2011. To make it work, Watson’s developers scripted a series of programs that, when pieced together, created an overarching game-playing system. And they’re not entirely sure how it works.

David Ferrucci, the lead researcher on the project, put it this way:

Watson absolutely surprises me. People say: ‘Why did it get that one wrong?’ I don’t know. ‘Why did it get that one right?’ I don’t know.

Which is actually quite disturbing. And not so much because we don’t understand why it succeeds, but because we don’t necessarily understand why it fails. Consequently, we can’t understand or anticipate the nature of its mistakes.


For example, Watson had one memorable gaffe that clearly demonstrated how, when an AI fails, it fails big time. During the Final Jeopardy portion, it was asked, “Its largest airport is named for a World War II hero; its second largest, for a World War II battle.” Watson responded with, “What is Toronto?”

Given that Toronto’s Billy Bishop Airport is named after a war hero, that was not a terrible guess. But the reason this was such a blatant mistake is that the category was “U.S. Cities.” Toronto, not being a U.S. city, couldn’t possibly have been the correct answer. (The intended response was Chicago, whose O’Hare and Midway airports are named for a World War II hero and a World War II battle, respectively.)

Again, this is the important distinction that needs to be made when addressing the potential for a highly generalized AI. Weak, narrow systems are extremely powerful, but they’re also extremely stupid; they’re completely lacking in common sense. Given enough autonomy and responsibility, a failed answer or a wrong decision could be catastrophic.

As another example, take the recent initiative to give robots their very own Internet. By providing and sharing information amongst themselves, it’s hoped that these bots can learn without having to be programmed. A problem arises, however, when the instructions for a task are mismatched — the result of an AI error. A stupid robot, acting without common sense, would simply execute the task even when the instructions are wrong. In another 30 to 40 years, one can only imagine the kind of damage that could be done, either accidentally or by a malicious script kiddie.

Moreover, because expert systems like Watson will soon be able to conjure answers to questions that are beyond our comprehension, we won’t always know when they’re wrong. And that is a frightening prospect.

The Shape of Things to Come

It’s difficult to know exactly how, when, or where the first true AI catastrophe will occur, but we’re still several decades off. Our infrastructure is still not integrated or robust enough to allow for something really terrible to happen. But by the 2040s (if not sooner), our highly digital and increasingly interconnected world will be susceptible to these sorts of problems.


By that time, our power systems (electric grids, nuclear plants, etc.) could be vulnerable to errors and deliberate attacks. Already today, the U.S. has been able to infiltrate the control system software known to run centrifuges in Iranian nuclear facilities by virtue of its Stuxnet program — an incredibly sophisticated computer virus (if you can call it that). This program represents the future of cyber-espionage and cyber-weaponry — and it’s a pale shadow of things to come.

In future, more advanced versions will likely be able not just to infiltrate enemy or rival systems but to reverse-engineer them, inflict terrible damage — or even take control. But as the Morris Worm incident showed, it may be difficult to predict the downstream effects of these actions, particularly when dealing with autonomous, self-replicating code. It could also result in an AI arms race, with each side developing programs and counter-programs to get an edge on the other side’s technologies.


And though it might seem like the premise of a sci-fi novel, an AI catastrophe could also involve the deliberate or accidental takeover of any system running off an AI. This could include integrated military equipment, self-driving vehicles (including airplanes), robots, and factories. Should something like this occur, the challenge will be to disable the malign script (or source program) as quickly as possible, which may not be easy.

More conceptually, and in the years immediately preceding the onset of uncontainable self-improving machine intelligence, a narrow AI could be used (again, either deliberately or unintentionally) to execute a poorly articulated goal. The powerful system could over-prioritize one aspect of the goal, or grossly under-prioritize another. And it could make sweeping changes in the blink of an eye.
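
What a “poorly articulated goal” means in practice can be sketched in a few lines of code. This is our own toy illustration, not anything from the article, and every name in it is invented: the objective rewards a proxy (errors reported) instead of the intent (errors occurring), and the optimizer obediently finds the degenerate shortcut.

```python
# A mis-specified objective: we ask for the fewest *reported* errors,
# and the optimizer selects the plan that games the metric.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    actual_errors: int      # what we really care about
    reported_errors: int    # what the stated goal actually measures

candidates = [
    Plan("fix the underlying bugs", actual_errors=2, reported_errors=2),
    Plan("silence the error logger", actual_errors=50, reported_errors=0),
]

# The goal as articulated: minimise reported errors.
chosen = min(candidates, key=lambda p: p.reported_errors)
print(f"Chosen plan: {chosen.name} (actual errors: {chosen.actual_errors})")
# Output: "silence the error logger" wins, optimal by the stated goal
# and disastrous by the intended one.
```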

Hopefully, if and when this does happen, it will be containable and relatively minor in scope. But it will likely serve as a call to action in anticipation of more catastrophic episodes. As for now, and in consideration of these possibilities, we need to ensure that our systems are secure, smart, and resilient.

Images: Shutterstock/agsandrew; Washington Times; TIME, Potapov Alexander/Shutterstock.

This article can also be found on the io9 website at http://io9.com/how-much-longer-before-our-first-ai-catastrophe-464043243

The Guardian Interview with Ray Kurzweil

This is an excellent article from the Guardian entitled, “Are the robots about to rise? Google’s new director of engineering thinks so…”  If you’re just getting familiar with the concepts of the singularity and machine learning and transhumanism… then this is an excellent article to read.  I think the Guardian did a great job of presenting Ray Kurzweil’s ideas open-mindedly and without bias while, at the same time, keeping a critical eye to the facts.  The following is a quote from this article which I found compelling, “…the Google knowledge graph, which consists of 800m (million) concepts and the billions of relationships between them. This is already a neural network, a massive, distributed global “brain”. Can it learn? Can it think? It’s what some of the smartest people on the planet are working on…”  Wow!

Are the robots about to rise? Google’s new director of engineering thinks so…

Ray Kurzweil popularised the Terminator-like moment he called the ‘singularity’, when artificial intelligence overtakes human thinking. But now the man who hopes to be immortal is involved in the very same quest – on behalf of the tech behemoth.

Robot from The Terminator
The Terminator films envisage a future in which robots have become sentient and are at war with humankind. Ray Kurzweil thinks that machines could become ‘conscious’ by 2029, but believes they will augment us. Photograph: Solent News/Rex

It’s hard to know where to start with Ray Kurzweil. With the fact that he believes that he has a good chance of living for ever? He just has to stay alive “long enough” to be around for when the great life-extending technologies kick in (he’s 66 and he believes that “some of the baby-boomers will make it through”). Or with the fact that he’s predicted that in 15 years’ time, computers are going to trump people. That they will be smarter than we are. Not just better at doing sums than us and knowing what the best route is to Basildon. They already do that. But that they will be able to understand what we say, learn from experience, crack jokes, tell stories, flirt. Ray Kurzweil believes that, by 2029, computers will be able to do all the things that humans do. Only better.

But then everyone’s allowed their theories. It’s just that Kurzweil’s theories have a habit of coming true. And, while he’s been a successful technologist and entrepreneur and invented devices that have changed our world – the first flatbed scanner, the first computer program that could recognise a typeface, the first text-to-speech synthesizer and dozens more – and has been an important and influential advocate of artificial intelligence and what it will mean, he has also always been a lone voice in, if not quite a wilderness, then in something other than the mainstream.

And now? Now, he works at Google. Ray Kurzweil, who believes that we can live for ever and that computers will gain what looks a lot like consciousness in a little over a decade, is now Google’s director of engineering. The announcement of this, last year, was extraordinary enough. To people who work with tech or who are interested in tech and who are familiar with the idea that Kurzweil has popularised of “the singularity” – the moment in the future when men and machines will supposedly converge – and know him as either a brilliant maverick and visionary futurist, or a narcissistic crackpot obsessed with longevity, this was headline news in itself.

But it’s what came next that puts this into context. It’s since been revealed that Google has gone on an unprecedented shopping spree and is in the throes of assembling what looks like the greatest artificial intelligence laboratory on Earth; a laboratory designed to feast upon a resource of a kind that the world has never seen before: truly massive data. Our data. From the minutiae of our lives.

Google has bought almost every machine-learning and robotics company it can find, or at least the ones it rates. It made headlines two months ago, when it bought Boston Dynamics, the firm that produces spectacular, terrifyingly life-like military robots, for an “undisclosed” but undoubtedly massive sum. It spent $3.2bn (£1.9bn) on smart thermostat maker Nest Labs. And this month, it bought the secretive and cutting-edge British artificial intelligence startup DeepMind for £242m.

And those are just the big deals. It also bought Bot & Dolly, Meka Robotics, Holomni, Redwood Robotics and Schaft, and another AI startup, DNNresearch. It hired Geoff Hinton, a British computer scientist who’s probably the world’s leading expert on neural networks. And it has embarked upon what one DeepMind investor told the technology publication Re/code two weeks ago was “a Manhattan project of AI”. If artificial intelligence was really possible, and if anybody could do it, he said, “this will be the team”. The future, in ways we can’t even begin to imagine, will be Google’s.

There are no “ifs” in Ray Kurzweil’s vocabulary, however, when I meet him in his new home – a high-rise luxury apartment block in downtown San Francisco that’s become an emblem for the city in this, its latest incarnation, the Age of Google. Kurzweil does not do ifs, or doubt, and he most especially doesn’t do self-doubt. Though he’s bemused about the fact that “for the first time in my life I have a job” and has moved from the east coast where his wife, Sonya, still lives, to take it.

Ray Kurzweil photographed in San Francisco last year. Photograph: Zackary Canepari/Panos Pictures

Bill Gates calls him “the best person I know at predicting the future of artificial intelligence”. He’s received 19 honorary doctorates, and he’s been widely recognised as a genius. But he’s the sort of genius, it turns out, who’s not very good at boiling a kettle. He offers me a cup of coffee and when I accept he heads into the kitchen to make it, filling a kettle with water, putting a teaspoon of instant coffee into a cup, and then moments later, pouring the unboiled water on top of it. He stirs the undissolving lumps and I wonder whether to say anything but instead let him add almond milk – not eating dairy is just one of his multiple dietary rules – and politely say thank you as he hands it to me. It is, by quite some way, the worst cup of coffee I have ever tasted.

But then, he has other things on his mind. The future, for starters. And what it will look like. He’s been making predictions about the future for years, ever since he realised that one of the key things about inventing successful new products was inventing them at the right moment, and “so, as an engineer, I collected a lot of data”. In 1990, he predicted that a computer would defeat a world chess champion by 1998. In 1997, IBM’s Deep Blue defeated Garry Kasparov. He predicted the explosion of the world wide web at a time it was only being used by a few academics and he predicted dozens and dozens of other things that have largely come true, or that will soon, such as that by the year 2000, robotic leg prostheses would allow paraplegics to walk (the US military is currently trialling an “Iron Man” suit) and “cybernetic chauffeurs” would be able to drive cars (which Google has more or less cracked).

His critics point out that not all his predictions have exactly panned out (no US company has reached a market capitalisation of more than $1 trillion; “bioengineered treatments” have yet to cure cancer). But in any case, the predictions aren’t the meat of his work, just a byproduct. They’re based on his belief that technology progresses exponentially (as is also the case in Moore’s law, which sees computers’ performance doubling every two years). But then you just have to dig out an old mobile phone to understand that. The problem, he says, is that humans don’t think about the future that way. “Our intuition is linear.”
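
The gap between linear intuition and exponential progress is easy to quantify. Here is a quick worked comparison (our own arithmetic; the linear rate is an illustrative assumption, while the two-year doubling period is the Moore’s law figure mentioned above):

```python
# Linear intuition vs Moore's-law-style exponential growth over 30 years.

YEARS = 30
DOUBLING_PERIOD_YEARS = 2            # Moore's law: doubling every ~2 years

linear_projection = 1 + 0.5 * YEARS                       # "steadily better"
exponential_projection = 2 ** (YEARS / DOUBLING_PERIOD_YEARS)

print(f"Linear projection after {YEARS} years:      {linear_projection:.0f}x")
print(f"Exponential projection after {YEARS} years: {exponential_projection:,.0f}x")
# 16x vs 32,768x: that gap is why linear intuition produces bad forecasts.
```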

When Kurzweil first started talking about the “singularity”, a conceit he borrowed from the science-fiction writer Vernor Vinge, he was dismissed as a fantasist. He has been saying for years that he believes that the Turing test – the moment at which a computer will exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human – will be passed in 2029. The difference is that when he began saying it, the fax machine hadn’t been invented. But now, well… it’s another story.

“My book The Age of Spiritual Machines came out in 1999 and we had a conference of AI experts at Stanford, where we took a poll by hand about when you think the Turing test would be passed. The consensus was hundreds of years. And a pretty good contingent thought that it would never be done.

“And today, I’m pretty much at the median of what AI experts think and the public is kind of with them. Because the public has seen things like Siri [the iPhone’s voice-recognition technology] where you talk to a computer, they’ve seen the Google self-driving cars. My views are not radical any more. I’ve actually stayed consistent. It’s the rest of the world that’s changing its view.”

And yet, we still haven’t quite managed to get to grips with what that means. The Spike Jonze film, Her, which is set in the near future and has Joaquin Phoenix falling in love with a computer operating system, is not so much fantasy, according to Kurzweil, as a slightly underambitious rendering of the brave new world we are about to enter. “A lot of the dramatic tension is provided by the fact that Theodore’s love interest does not have a body,” Kurzweil writes in a recent review of it. “But this is an unrealistic notion. It would be technically trivial in the future to provide her with a virtual visual presence to match her virtual auditory presence.”

But then he predicts that by 2045 computers will be a billion times more powerful than all of the human brains on Earth. And the characters’ creation of an avatar of a dead person based on their writings, in Jonze’s film, is an idea that he’s been banging on about for years. He’s gathered all of his father’s writings and ephemera in an archive and believes it will be possible to retro-engineer him at some point in the future.

So far, so sci-fi. Except that Kurzweil’s new home isn’t some futuristic MegaCorp intent on world domination. It’s not Skynet. Or, maybe it is, but we largely still think of it as that helpful search engine with the cool design. Kurzweil has worked with Google’s co-founder Larry Page on special projects over several years. “And I’d been having ongoing conversations with him about artificial intelligence and what Google is doing and what I was trying to do. And basically he said, ‘Do it here. We’ll give you the independence you’ve had with your own company, but you’ll have these Google-scale resources.'”

And it’s the Google-scale resources that are beyond anything the world has seen before. Such as the huge data sets that result from 1 billion people using Google every single day. And the Google knowledge graph, which consists of 800m concepts and the billions of relationships between them. This is already a neural network, a massive, distributed global “brain”. Can it learn? Can it think? It’s what some of the smartest people on the planet are working on next.

Peter Norvig, Google’s research director, said recently that the company employs “less than 50% but certainly more than 5%” of the world’s leading experts on machine learning. And that was before it bought DeepMind which, it should be noted, agreed to the deal with the proviso that Google set up an ethics board to look at the question of what machine learning will actually mean when it’s in the hands of what has become the most powerful company on the planet. Of what machine learning might look like when the machines have learned to make their own decisions. Or gained what we humans call “consciousness”.

Garry Kasparov ponders a move against IBM’s Deep Blue. Ray Kurzweil predicted the computer’s triumph. Photograph: Stan Honda/AFP/Getty Images

 

I first saw Boston Dynamics’ robots in action at a presentation at the Singularity University, the university that Ray Kurzweil co-founded and that Google helped fund and which is devoted to exploring exponential technologies. And it was the Singularity University’s own robotics faculty member Dan Barry who sounded a note of alarm about what the technology might mean: “I don’t see any end point here,” he said when talking about the use of military robots. “At some point humans aren’t going to be fast enough. So what you do is that you make them autonomous. And where does that end? Terminator?”

And the woman who headed the Defence Advanced Research Projects Agency (Darpa), the secretive US military agency that funded the development of BigDog? Regina Dugan. Guess where she works now?

Kurzweil’s job description consists of a one-line brief. “I don’t have a 20-page packet of instructions,” he says. “I have a one-sentence spec. Which is to help bring natural language understanding to Google. And how they do that is up to me.”

Language, he believes, is the key to everything. “And my project is ultimately to base search on really understanding what the language means. When you write an article you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world’s information. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage an intelligent dialogue with the user to be able to answer their questions.”

Google will know the answer to your question before you have asked it, he says. It will have read every email you’ve ever written, every document, every idle thought you’ve ever tapped into a search-engine box. It will know you better than your intimate partner does. Better, perhaps, than even yourself.

The most successful example of natural-language processing so far is IBM’s computer Watson, which in 2011 went on the US quiz show Jeopardy and won. And Jeopardy is a pretty broad task: it involves similes and jokes and riddles. For example, it was given “a long tiresome speech delivered by a frothy pie topping” in the rhyme category and quickly responded: “A meringue harangue.” Which is pretty clever: the humans didn’t get it. And what’s not generally appreciated is that Watson’s knowledge was not hand-coded by engineers. Watson got it by reading. Wikipedia – all of it.

Kurzweil says: “Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM’s Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I’m doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn’t understand the implications of what it’s reading. It’s doing a sort of pattern matching. It doesn’t understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn’t understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.”
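
The distinction Kurzweil is drawing, pattern matching over words versus encoding what a sentence implies, can be sketched roughly as follows. This is our own illustration; the class and its fields are invented for the example, not Google’s or IBM’s actual representation:

```python
# Pattern matching vs semantic encoding of "John sold his red Volvo to Mary".

from dataclasses import dataclass

# What pattern matching sees: co-occurring tokens, no structure.
bag_of_words = {"john", "sold", "his", "red", "volvo", "to", "mary"}

@dataclass
class SaleEvent:
    seller: str
    buyer: str
    item: str

    def implications(self):
        # Facts entailed by a sale but never stated in the sentence itself.
        return [
            f"{self.item} was owned by {self.seller}",
            f"{self.item} is now owned by {self.buyer}",
            f"{self.buyer} gave {self.seller} something of value in return",
        ]

event = SaleEvent(seller="John", buyer="Mary", item="the red Volvo")
for fact in event.implications():
    print(fact)
```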

And once the computers can read their own instructions, well… gaining domination over the rest of the universe will surely be easy pickings. Though Kurzweil, being a techno-optimist, doesn’t worry about the prospect of being enslaved by a master race of newly liberated iPhones with ideas above their station. He believes technology will augment us. Make us better, smarter, fitter. That just as we’ve already outsourced our ability to remember telephone numbers to their electronic embrace, so we will welcome nanotechnologies that thin our blood and boost our brain cells. His mind-reading search engine will be a “cybernetic friend”. He is unimpressed by Google Glass because he doesn’t want any technological filter between us and reality. He just wants reality to be that much better.

“I thought about if I had all the money in the world, what would I want to do?” he says. “And I would want to do this. This project. This is not a new interest for me. This idea goes back 50 years. I’ve been thinking about artificial intelligence and how the brain works for 50 years.”

The evidence of those 50 years is dotted all around the apartment. He shows me a cartoon he came up with in the 60s which shows a brain in a vat. And there’s a still from a TV quiz show that he entered aged 17 with his first invention: he’d programmed a computer to compose original music. On his walls are paintings that were produced by a computer programmed to create its own original artworks. And scrapbooks that detail the histories of various relatives, the aunts and uncles who escaped from Nazi Germany on the Kindertransport, his great grandmother who set up what he says was Europe’s first school to provide higher education for girls.

Jeopardy is won by a machine
Kurzweil suggests that language is the key to teaching machines to think. He says his job is to ‘base search on really understanding what the language means’. The most successful example of natural-language processing to date is IBM’s computer Watson, which in 2011 went on the US quiz show Jeopardy and won (shown above). Photograph: AP

 

His home is nothing if not eclectic. It’s a shiny apartment in a shiny apartment block with big glass windows and modern furnishings but it’s imbued with the sort of meaning and memories and resonances that, as yet, no machine can understand. His relatives escaped the Holocaust “because they used their minds. That’s actually the philosophy of my family. The power of human ideas. I remember my grandfather coming back from his first return visit to Europe. I was seven and he told me he’d been given the opportunity to handle – with his own hands – original documents by Leonardo da Vinci. He talked about it in very reverential terms, like these were sacred documents. But they weren’t handed down to us by God. They were created by a guy, a person. A single human had been very influential and had changed the world. The message was that human ideas changed the world. And that is the only thing that could change the world.”

On his fingers are two rings, one from the Massachusetts Institute of Technology, where he studied, and another that was created by a 3D printer, and on his wrist is a 30-year-old Mickey Mouse watch. “It’s very important to hold on to our whimsy,” he says when I ask him about it. Why? “I think it’s the highest level of our neocortex. Whimsy, humour…”

Even more engagingly, tapping away on a computer in the study next door I find Amy, his daughter. She’s a writer and a teacher and warm and open, and while Kurzweil goes off to have his photo taken, she tells me that her childhood was like “growing up in the future”.

Is that what it felt like? “I do feel a little bit like the ideas I grew up hearing about are now ubiquitous… Everything is changing so quickly and it’s not something that people realise. When we were kids, people used to talk about what they were going to do when they were older, and they didn’t necessarily consider how many changes would happen, and how the world would be different, but that was at the back of my head.”

And what about her father’s idea of living for ever? What did she make of that? “What I think is interesting is that all kids think they are going to live for ever so actually it wasn’t that much of a disconnect for me. I think it made perfect sense. Now it makes less sense.”

Well, yes. But there’s not a scintilla of doubt in Kurzweil’s mind about this. My arguments slide off what looks like his carefully moisturised skin. “My health regime is a wake-up call to my baby-boomer peers,” he says. “Most of whom are accepting the normal cycle of life and accepting they are getting to the end of their productive years. That’s not my view. Now that health and medicine is an information technology, it is going to expand exponentially. We will see very dramatic changes ahead. According to my model, it’s only 10–15 years away from where we’ll be adding more than a year every year to life expectancy because of progress. It’s kind of a tipping point in longevity.”

He does, at moments like these, have something of a mad glint in his eye. Or at least the profound certitude of a fundamentalist cleric. Newsweek, a few years back, quoted an anonymous colleague claiming that, “Ray is going through the single most public midlife crisis that any male has ever gone through.” His evangelism (and commercial endorsement) of a whole lot of dietary supplements has more than a touch of the “Dr Gillian McKeith (PhD)” to it. And it’s hard not to ascribe a psychological aspect to this. He lost his adored father, a brilliant man, he says, a composer who had been largely unsuccessful and unrecognised in his lifetime, to a massive heart attack when Kurzweil was 22. And a diagnosis of diabetes at the age of 35 led him to overhaul his diet.

But isn’t he simply refusing to accept, on an emotional level, that everyone gets older, everybody dies?

“I think that’s a great rationalisation because our immediate reaction to hearing someone has died is that it’s not a good thing. We’re sad. We consider it a tragedy. So for thousands of years, we did the next best thing which is to rationalise. ‘Oh that tragic thing? That’s really a good thing.’ One of the major goals of religion is to come up with some story that says death is really a good thing. It’s not. It’s a tragedy. And people think we’re talking about a 95-year-old living for hundreds of years. But that’s not what we’re talking about. We’re talking radical life extension, radical life enhancement.

“We are talking about making ourselves millions of times more intelligent and being able to have virtual-reality environments which are as fantastic as our imagination.”

Although possibly this is what Kurzweil’s critics, such as the biologist PZ Myers, mean when they say that the problem with Kurzweil’s theories is that “it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad.” Or Jaron Lanier, who calls him “a genius” but “a product of a narcissistic age”.

But then, it’s Kurzweil’s single-mindedness that’s been the foundation of his success, that made him his first fortune when he was still a teenager, and that shows no sign of letting up. Do you think he’ll live for ever, I ask Amy. “I hope so,” she says, which seems like a reasonable thing for an affectionate daughter to wish for. Still, I hope he does too. Because the future is almost here. And it looks like it’s going to be quite a ride.

 

This article can also be found on the Guardian website.

 

The Singularity and the Methuselarity: Similarities and Differences by Aubrey de Grey

This is a paper written by Aubrey de Grey discussing the technological singularity vs the Methuselarity.  The original paper is entitled, “The singularity and the Methuselarity: similarities and differences” and can be found on the SENS Research Foundation website or you can follow this link: http://www.sens.org/files/pdf/FHTI07-deGrey.pdf

In: Strategy for the Future (Bushko R, ed.), 2008, in press.

The singularity and the Methuselarity: similarities and differences

Aubrey D.N.J. de Grey, Ph.D. Methuselah Foundation PO Box 1143, Lorton, VA 22079, USA Email: aubrey@sens.org

Abstract

Aging, being a composite of innumerable types of molecular and cellular decay, will be defeated incrementally. I have for some time predicted that this succession of advances will feature a threshold, which I here christen the “Methuselarity,” following which there will actually be a progressive decline in the rate of improvement in our anti-aging technology that is required to prevent a rise in our risk of death from age-related causes as we become chronologically older. Various commentators have observed the similarity of this prediction to that made by Good, Vinge, Kurzweil and others concerning technology in general (and, in particular, computer technology), which they have termed the “singularity.” In this essay I compare and contrast these two concepts.

The singularity: a uniquely unique event in humanity’s future

“Unique” is, of course, an overused word to describe momentous events – arguably, even more overused than “historic.” How, then, can I dare to describe something as uniquely unique?

Well, I will begin by pulling back a fraction from that description. There are actually, in my view, two possible events in humanity’s future that merit this description. But I do not feel very bad about this qualification, because I believe that those two events are, in all probability, mutually exclusive. The singularity is one; the demise of humanity is the other. Hence my choice of the indefinite article: the singularity is not “the” uniquely unique event in humanity’s future, because it may not occur, but if it does occur, nothing comparable will either precede or follow it.

The singularity has been defined in many related but subtly distinct ways over the years, so let me begin my discussion of it by making clear what I mean by the term. I adhere to the following definition: “an asymptotically rapid increase in the sophistication of technology on whose behaviour humans depend.” I do not use the word to mean, for example, “the technological creation of smarter-than-human intelligence” (which is the definition currently given by SIAI, the Singularity Institute for Artificial Intelligence [1]) – despite my agreement with the view that the technology most likely to bring about the singularity (and, indeed, the one that was originally used to define it) is precisely the one that SIAI study, namely recursively self-improving artificial intelligence (of which more below). I am sticking to the more abstract definition partly because it seems to me to encapsulate the main point of why the singularity is indeed uniquely unique, and partly because it will help me to highlight what distinguishes the singularity from the Methuselarity.

One aspect of my definition that may raise eyebrows is its use of the word “asymptotically” rather than “exponentially.” I feel sure that von Neumann [2] would agree with me on this: the mere perpetuation of Moore’s Law [3] will not bring about the singularity. A gravitational singularity, which is of course the etymological source of the term, is the centre (not, I stress, the event horizon) of a black hole: the point at which the force of gravity is infinite – or, to be more precise, the point arbitrarily near to which gravity is arbitrarily strong. The distance between the singularity and any point of interest (inside or outside the event horizon) at which gravity is finite is, of course, finite. This is an asymptotic relation between distance and strength: if point X is distance Y from the singularity, it is not possible to travel from X, along the line between X and the singularity, by a distance greater than Y, and experience continuously increasing gravity. Exponential (though not inverse exponential! – see below) relations are not like this: they have no asymptote. If the force of gravity exerted by a particular body were exponential (though still increasing with decreasing distance from the body), the relation between distance from that body and gravity exerted by it would be defined in terms of distance from the point furthest away from it (“on the other side of the Universe”). Call the gravity exerted at that point X and suppose that the gravity exerted at half that distance from the body is 4X (which is the same as for gravity in real life). Then the gravity exerted by the body at a point arbitrarily close to it is not arbitrarily large – it is just 16X, since that point is exactly twice as far away from the point of minimum gravity as the 4X point is.
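
This arithmetic is easy to verify numerically. Below is a minimal Python sketch, purely an editorial illustration with arbitrary units; the constants D and X and the factor of 4 per half-distance step are taken straight from the example above:

```python
# Editorial illustration (not from the paper): compare an inverse-square
# law, which diverges as we approach the body, with an exponential law
# calibrated to the same two reference points, which tops out at 16X.
D, X = 1.0, 1.0  # arbitrary units: D is the "far side of the Universe"

def inverse_square(d):
    # Pull at distance d, normalised so that the pull at distance D is X.
    return X * (D / d) ** 2

def exponential(d):
    # Pull multiplied by 4 for every half-D step taken towards the body.
    return X * 4 ** ((D - d) / (D / 2))

for d in (D, D / 2, D / 8, 1e-9):
    print(f"d = {d:g}: inverse-square = {inverse_square(d):.4g}, "
          f"exponential = {exponential(d):.4g}")
# The inverse-square column grows without bound as d -> 0; the
# exponential column approaches 16X, exactly as the essay's arithmetic says.
```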

Having belaboured this point, I now hope to justify doing so. Will the technological singularity, defined as I define it above, happen at all? Not if we merely proceed according to Moore’s Law, because that does not predict infinite rates of progress at any point in the future. But wait – who’s to say that progress will remain “only” exponential? Might not progress exceed this rate, following an inverse polynomial curve (like gravity) or even an inverse exponential curve? I, for one, don’t see why it shouldn’t. If we consider specifically the means whereby the singularity is most widely expected to occur, namely the development of computers with the capacity for recursive improvement of their own workings [4], I can see no argument why the rate at which such a computer would improve itself should not follow an inverse exponential curve, i.e. one in which achieving a given degree of improvement takes time X, repeating that degree of improvement takes X/2, then X/4, and so on.
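
To make the distinction concrete, here is a small editorial sketch (not from the paper) of the halving-time schedule just described, showing that the entire infinite sequence of improvements fits into a finite span of time:

```python
# Editorial sketch: why an "inverse exponential" improvement curve
# produces a true asymptote. If each successive doubling of capability
# takes half as long as the previous one (X, X/2, X/4, ...), then
# infinitely many doublings fit into the finite time 2X, because the
# geometric series X + X/2 + X/4 + ... sums to 2X.
X = 1.0       # time for the first doubling, in arbitrary units
elapsed = 0.0
for n in range(20):
    elapsed += X / 2 ** n  # the (n+1)-th doubling takes X / 2^n
print(f"capability after 20 doublings: 2^20 = {2**20}; "
      f"elapsed time: {elapsed:.6f}")
# elapsed is 1.999998...: all twenty doublings fit inside 2X, and so
# would infinitely many more -- unbounded capability in bounded time,
# which merely exponential (Moore's-Law-style) growth never delivers.
```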

Why does this matter? It might matter quite a lot, given that (in most people’s view, anyway) the purpose of creating computers that are smarter than us is to benefit us rather than to supersede us. Human intelligence, I believe, will not exhibit a super-exponential rate of growth, because our cognitive hardware is incompatible with that. Now, I grant that I have only rather wishy-washy intuitive reasons for this view – but what I think can be quite safely said is that our ability to “keep up” with the rate of progress of recursively self-improving computers will be in inverse relation to that rate, and thus that super-exponentially self-improving computers will be more likely to escape our control than “merely” exponentially self-improving ones will. Computers have hardware constraints too, of course, so the formal asymptotic limit of truly infinite rates of improvement (and, thus, truly infinite intelligence of such machines) will not be reached – but that is scant solace for those of us who have been superseded (which could, of course, mean “eliminated”) some time previously. There is, of course, the distinct possibility that even exponentially self-improving systems would similarly supersede us, but the work of SIAI and others to prevent this must be taken into account in quantifying that risk.

Let us now consider the aftermath of a “successful” singularity, i.e. one in which recursively self-improving systems exist and have duly improved themselves out of sight, but have been built in such a way that they permanently remain “friendly” to us. It is legitimate to wonder what would happen next, albeit that to do so is in defiance of Vinge [5]. While very little can confidently be said, I feel able to make one prediction: that our electronic guardians and minions will not be making their superintelligence terribly conspicuous to us. If we can define “friendly AI” as AI that permits us as a species to follow our preferred, presumably familiarly dawdling, trajectory of progress, and yet also to maintain our self-image, it will probably do the overwhelming majority of its work in the background, mysteriously keeping things the way we want them without worrying us about how it’s doing it. We may dimly notice the statistically implausible occurrence of hurricanes only in entirely unpopulated regions, of sufficiently deep snow in just the right places to save the lives of reckless mountaineers, and so on – but we will not dwell on it, and quite soon we will take it for granted.

A reasonable question to ask is, well, since even a super-exponentially self-improving AI will always have finite intelligence, might it not at some point create an even more rapidly self-improving system that could supersede it? Indeed it might (I think) – but, from our point of view, so what? If we have succeeded in creating a permanently friendly AI, we can be sure that any “next-generation” AI that it created would also be friendly, and thus (by the previous paragraph’s logic) largely invisible. Thus, from our perspective, there will only be one singularity.

In closing this section I return to my claim that the singularity and the demise of humanity are, in all probability, mutually exclusive. Clearly if our demise precedes the singularity then the singularity cannot occur. Can our demise occur if preceded by the singularity? Almost certainly not, I would say: the interval available for our demise between the development of recursively self-improving AI and the attainment by that AI of extremely thorough ability to protect us (even from, for example, nearby supernovae) will be short. (I exclude here the possibility that the singularity will occur via the creation of AI that is not friendly to us, only because I think humanity’s life expectancy in that scenario is so very short that this is equivalent from our point of view to the singularity not occurring at all.) The “area under the curve” of humanity’s probability of elimination at any time after the singularity is thus very small. I am, of course, discounting here the possibility that even arbitrarily intelligent and powerful systems cannot protect us from truly cosmic events such as the heat death of the Universe, but I agree with Deutsch [6] that this is unlikely given the time available.

The Methuselarity: the biogerontological counterpart of the singularity

In a recent interview, Watson was asked what would be the next event in the history of biology that would compare in significance to his and Crick’s discovery of the structure of DNA, and he replied that there would never be one [7]. I think he was correct. However, I agree with him only if I am rather careful in defining “biology” as the discovery of features of the living world, and excluding biotechnology, which for present purposes I define as the exploitation of such discoveries. In biotechnology I believe that there will certainly be a counterpart, something that will outstrip in significance every other advance either predating or following it: the Methuselarity.

For almost a decade following my graduation in 1985, I conducted research in artificial intelligence. I switched fields to biogerontology shortly after becoming aware that the defeat of aging was vastly less on biologists’ agenda than I had hitherto presumed. I was not, at that time, aware of the concept of recursively self-improving AI and the singularity, though perhaps I should have been. But even if I had been, I think I would still have made the career change that I did. Why?

Humans are very, very good at adjusting their aspirations to match their expectations. When things get better, people are happy – but if they stay better and show every sign of continuing that way, people become blasé. Conversely, when things get worse people are unhappy, but if they stay worse and show every sign of continuing that way, people become philosophical. This is why, by all measures that have to my knowledge been employed, people in the developed world are on average neither much happier nor much less happy now than they were when things were objectively far worse. This is a good thing in many ways, but in at least one way it is a problem: it dampens our ardour to improve our lives more rapidly. In particular, it depletes the ranks of “unreasonable men” to whom Shaw so astutely credited all progress [8]. There are far too few unreasonable men and women in biology, and especially in biogerontology. I am proud to call myself an exception: someone who is comfortable devoting his life to the most important problems of all, even if they appear thoroughly intractable [9]. In my youth, I felt I could make the most difference to the world by helping to develop intelligent computers; but when I discovered the truth about biologists’ attitude to aging I knew that I could make even more difference in that field.

Why is aging so important? Aging kills people, yes, but so do quite a few other things – and moreover, life is about quality as well as quantity, and intelligent machines might very greatly improve the quality of life of an awful lot of people, not least by virtue of providing essentially unbounded prosperity for all.

Even if we take into account the fact that aspirations track expectations, such that what really matters is to maintain a good rate of improvement of (objective) quality of life, it is hard to deny that the development of super-intelligent machines will be of astronomical benefit to our lives. But let’s be clear: quantity of life matters too. There is a well-established metric that folds together the quality and quantity benefits of a given technological or other opportunity: it is the “quality-adjusted life year” or QALY [10].

Historically, mainstream biogerontologists have been publicly cautious regarding predictions of the biomedical consequences of their work, though this is gradually changing. But even privately, few biogerontologists have viewed aging as amenable to dramatic change: they have been aware that it is a hugely multi-faceted phenomenon, which will yield only incrementally to medical progress if it yields at all. This places them in a difficult position when arguing for the importance of their work relative to other supplicants for biomedical research resources. Yes, there is always a benefit to a QALY, and yes, progress against aging will deliver QALYs – but the force of this argument is diminished by two key factors, namely the probability of success (which biogerontologists cannot conclusively show to be high) and the entrenched ageism in society, which views it as “fair” to deprioritise health care for the elderly. This quandary is well illustrated by the current “Longevity Dividend” initiative, which seeks to focus policy-makers’ minds on the ever-dependable lure of lucre associated with keeping people youthful, rather than on the moral imperative [11].

But this is in the process of changing – indeed, of being turned on its head. This is for one reason and one only: it is becoming appreciated that aging may be amenable to comprehensive postponement by regenerative medicine [12,13]. And that reason makes all the difference because it creates the possibility – indeed, the virtual certainty – of the Methuselarity.

Having tantalised you for so long, I cannot further delay revealing what the Methuselarity actually is. It is the point in our progress against aging at which the age to which we can rationally expect to live without age-related physiological and cognitive decline goes from the low three digits to infinity. And my use here of the word “point” is almost accurate: this transition will, in my view, take no longer than a few years. Hence the – superficial – similarity to the singularity.

I have set out elsewhere, first qualitatively [14] and then quantitatively [15], the details of my reasons for believing that the application of regenerative medicine to aging will deliver this cusp; thus, here I will only summarise. Regenerative medicine, by definition, is the partial or complete restoration of a damaged biological structure to its pre-damaged state. Since aging is the accumulation of damage, it is in theory a legitimate target of regenerative medicine, and success in such a venture would constitute bona fide rejuvenation, the restoration of a lower biological age. (The bulk of my work over the past decade can be summarised as the elaboration of that “theory” into an increasingly detailed and promising project plan for actual implementation [16] – but I digress.) This rejuvenation would not be total: some aspects of the damage that constitutes aging would be resistant to these therapies. But not intrinsically resistant: all such damage could in principle be reversed or obviated by sufficiently sophisticated repair-and-maintenance (i.e., regenerative) interventions. Thus arises the concept of a rate of improvement of the comprehensiveness of these rejuvenation therapies that is sufficient to outrun the problem: to deplete the levels of all types of damage more rapidly than they are accumulating, even though intrinsically the damage still present will be progressively more recalcitrant. I have named this required rate of improvement “longevity escape velocity” or LEV [14,15].
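
A toy simulation may help fix the idea of LEV. The sketch below is an editorial illustration only; every number in it is invented, and real aging involves many interacting damage types rather than a single scalar:

```python
# Editorial toy model (invented numbers; not from the SENS literature):
# damage accrues at a fixed rate, therapies remove a fraction of whatever
# damage is present, and that fraction grows each year as the therapies
# become more comprehensive. LEV is the regime in which total damage
# falls even though damage never stops accruing.
accumulation = 1.0    # units of new damage per year
repair = 0.5          # fraction of existing damage the therapies can fix
improvement = 0.08    # yearly gain in therapy comprehensiveness
damage = 40.0         # damage already present when therapies arrive

for year in range(1, 31):
    damage += accumulation          # new damage accrues
    damage -= repair * damage       # therapies remove what they can reach
    repair = min(0.95, repair + improvement)
    if year % 5 == 0:
        print(f"year {year:2d}: residual damage = {damage:6.2f}")
# With these numbers the residual damage declines towards a low plateau:
# the therapies are outrunning the problem, i.e. LEV is being maintained.
```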

It is important to understand that LEV is not an unchanging quantity, as it might be if it were a feature of our biology. Rather, it will vary with time – and exactly how it will probably vary is a topic I address in the next section. LEV will, however, remain non-zero for as long as there remain any types of damage that we cannot remove or obviate. Thus, the formal possibility exists that we will at some point achieve LEV but that at some subsequent date our rate of progress against aging will slip back below LEV. However, I have claimed that this will almost certainly not happen: that, once surpassed, LEV will be maintained indefinitely. This claim is essentially equivalent to the claim that the Methuselarity will occur at all: the Methuselarity is, simply, the one and only point in the future at which LEV is achieved.

The singularity and the Methuselarity: some key differences

Having described the singularity and the Methuselarity individually, I now examine how they differ. I hope to communicate that the superficial similarities that they exhibit evaporate rather thoroughly when one delves more deeply.

Perhaps the most important contrast between the singularity and the Methuselarity is the relevance of accelerating change. In the first section of this essay I dealt at some length with the range of trajectories that I think are plausible for the rate of improvement of self-improving artificial intelligence systems – but it will have been apparent that all the trajectories I discussed were accelerating. It might intuitively be presumed that, since aging is a composite of innumerable types of damage that accumulate at different rates and that possess different degrees of difficulty to remove, our efforts to maintain youth in the face of increasing chronological age will require an accelerating rate of progress in our biomedical prowess. But this is not correct.

The central reason why progress need not accelerate is that there is a spectrum not only in the recalcitrance of the various types of damage that constitute aging but also in their rates of accumulation. As biomedical gerontologists, we will always focus on the highest-priority types of damage, the types that are most in danger of killing people. Thus, the most rapidly-accumulating types of damage will preferentially be those against which we most rapidly develop repair-and-maintenance interventions. There will, to be sure, be “spikes” in this distribution – types of damage that accumulate relatively rapidly and are also relatively hard to combat. But we are discussing probabilities here, and if we aggregate the probability distributions of the timeframes on which the various types of damage, with their particular rates of accumulation and degrees of difficulty to combat, are in fact brought under control, the conclusion is clear: we are almost certain to see a progressive and unbroken decline in the rate at which we need to develop new anti-aging therapies once LEV is first achieved. (I do not mean to say that this progression will be absolutely monotonic – but the “wobble” in how rapidly progress needs to occur will be small compared to the margin of error available, i.e. the margin by which the average rate of progress exceeds LEV.) This conclusion is, of course, subject to assumptions concerning the distribution of these types of damage on those two dimensions – but, in the absence of evidence to the contrary, a smooth (log-normal, or similar) distribution must be assumed.
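
For readers who want to see the shape of this argument, here is a rough editorial Monte Carlo sketch; the log-normal distribution and all of its parameters are assumptions, as the paragraph above itself notes:

```python
# Editorial Monte Carlo sketch of the probabilistic claim above. If the
# accumulation rates of many damage types are log-normally distributed,
# and we always tackle the fastest-accumulating unfixed type next, then
# the residual accumulation rate -- the pace our therapies must keep
# beating -- falls steeply at first and ever more gently thereafter.
import random

random.seed(0)
rates = sorted((random.lognormvariate(0, 1.5) for _ in range(1000)),
               reverse=True)

for fixed in range(0, 1001, 200):
    residual = sum(rates[fixed:])
    print(f"after fixing the {fixed:4d} fastest-accumulating types, "
          f"residual rate = {residual:8.1f}")
# No acceleration is required: each successive tranche of fixes leaves a
# smaller residual to outrun than the tranche before it did.
```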

The other fundamental difference between the singularity and the Methuselarity that I wish to highlight is its impact on “the human condition” – on humanity’s experience of the world and its view of itself. I make at this point perhaps my most controversial claim in this essay: that in this regard, the Methuselarity will probably be far more momentous than the singularity.

How can this be? Surely I have just shown that the Methuselarity will be the consequence of only quite modest (and, thereafter, actually decreasing) rates of progress in postponing aging, whereas the singularity will result from what for practical purposes can be regarded as infinite rates of progress in the prowess of computers? Indeed I have. But when we focus on humanity’s experience of the world and its view of itself, what matters is not how rapidly things are changing but how rapidly those changes affect us. In the case of the singularity, I have noted earlier in this essay that if we survive it at all (by virtue of having succeeded in making these ultra-powerful computers permanently friendly to us) then we will move from a shortly-pre-singularity situation in which computers already make our lives rather easy to a situation in which they fade into the background and stay there. I contend that, from our point of view, this is really not much of a difference, psychologically or socially: computers are already far easier to use than the first PCs were, and are getting easier all the time, and the main theme of that progression is that we are increasingly able to treat them as if they were not computers at all. It seems to me that the singularity may well, in this regard, merely be the icing on a cake that will already have been baked.

Compare this to the effect of the Methuselarity on the human condition. In this case we will progressively and smoothly improve our remaining life expectancy as calculated from the rate of accumulation of those types of damage that we cannot yet fix. So far, so boring. But wait – is that the whole story? No, because what will matter is the bottom line, how long people think they’re actually going to live.

These days, people are notoriously bad at predicting how long they’re going to live. There is a strong tendency to expect to live only about as long as one’s parents or grandparents did (just so long as they died of old age, of course) [17]. This is clearly absurd, given the rapid rise of life expectancies throughout the developed world in the past half-century and the fact that, unlike the previous half-century, that rise has resulted from falling mortality rates at older ages rather than in infancy or childbirth. It persists, I believe, simply because the rise in life expectancy has been rapid only by historical standards: unless one’s paying attention, it’s not been rapid by the standards of progress in technology, so it easily goes unnoticed.

This will not last, however. As the rate of improvement in life expectancy increases, so the disparity between that headline number and the age which someone of any particular age can expect to reach also increases. But here’s the crux: these two quantities do not increase in proportion. In particular, when the rate of improvement of life expectancy reaches one year per year – which, in case you didn’t know, is only a few times faster than is typical in the developed world today [18] – the age that one can expect to reach undergoes a dramatic shift, because the risk of dying from age-related causes at any given age suddenly plummets to near zero. And that is (another way of defining) the Methuselarity.
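
The sharpness of this threshold can be seen with a back-of-the-envelope sketch (an editorial illustration; the starting remaining expectancy of 25 years is invented):

```python
# Editorial sketch: why one year of added remaining life expectancy per
# calendar year is a sharp threshold. Each year that passes costs a year
# of remaining expectancy; therapies add `gain` years back.
def years_until_exhausted(remaining=25.0, gain=0.5, horizon=100000):
    for year in range(1, horizon + 1):
        remaining += gain - 1.0
        if remaining <= 0:
            return year
    return None  # expectancy never exhausted within the horizon

for g in (0.0, 0.5, 0.9, 1.0, 1.1):
    result = years_until_exhausted(gain=g)
    print(f"gain = {g:.1f} yr/yr: " +
          (f"expectancy exhausted after {result} years"
           if result else "never exhausted -- escape velocity"))
# Below 1.0 the clock always runs out eventually (after 250 years at a
# gain of 0.9); at 1.0 and above it never does. The jump from 0.9 to 1.0
# is the Methuselarity in miniature.
```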

To summarise my view, then: the singularity will take us from a point of considerable computing power that is mostly hidden from our concern to one of astronomical computing power that is just slightly more hidden. The Methuselarity, by contrast, will take us from a point of considerable medical prowess that only modestly benefits how long we can reasonably expect to live, to one of just slightly greater medical prowess that allows us confidence that we can live indefinitely. The contrast is rather stark, I think you will agree.

Epilogue: the Methuselarity and the singularity combined

Those who have followed my work since I began publishing in biogerontology may have noticed a subtle change in the way that I typically describe the Methuselarity’s impact on lifespans. Early on, I used to make probabilistic assertions about future life expectancy; now I make assertions about how soon we will see an individual (or a cohort) achieve a given age.

The reasons for this shift are many; some are down to my improved sense of what does and does not scare people. But an important reason is that my original style of prediction incorporated the implicit assumption that the Methuselarity would occur in the context of a continued smooth, and relatively slow, rate of reduction in our risks of death from causes unrelated to our age. I only belatedly realised that this assumption is unjustified – indeed, absurd. And the singularity is what makes it particularly absurd.

Roughly speaking, we prioritise our effort to avoid particular risks of death on the basis of the relative magnitude of those risks. Things that only have a 0.01% risk per year of killing us may not be considered worth working very hard to avoid, because even multiplied up over a long life they have only a 1% chance of being our cause of death. This immediately tells us that such risks will move altogether nearer to the forefront of our concerns as and when the Methuselarity occurs (or is even widely anticipated), because the greater number of years available to get unlucky means that the risk of these things being our cause of death is elevated. It seems clear that we will work to do something about that – to improve the efficiency with which we develop vaccines, to make our cars safer, and so on. But there would appear to be only so much we can do in that regard: first of all there are things that we really truly can’t do anything about, such as nearby supernovae, and secondly there are quite a few moderately risky activities that quite a lot of us enjoy.
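
The arithmetic in that paragraph checks out, and extending it shows just how much such small risks grow in a post-Methuselarity world (an editorial calculation, illustrative only):

```python
# Editorial arithmetic check: a 0.01% annual hazard, compounded, is about
# a 1% lifetime risk over a century -- but over post-Methuselarity
# timescales it becomes a leading candidate cause of death.
p = 0.0001  # 0.01% risk per year
for years in (100, 1000, 10000):
    cumulative = 1 - (1 - p) ** years
    print(f"{years:6d} years: {cumulative:6.2%} chance of this killing you")
# Prints roughly 1.00%, 9.52% and 63.21% respectively.
```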

The singularity changes all that. What the singularity will provide is the very rapid reduction to truly minute levels of the risk of death from any cause. You may have thought that my earlier mention of snow reliably saving careless mountaineers was in jest; indeed it was not. Moreover, the residual risk that our rate of improvement of medical therapies against aging will at some point fall below LEV will also essentially disappear with the singularity. (Clearly the possibility also exists that the singularity will precede, and thus bring about, the Methuselarity – but that does not materially alter these considerations.)

One of my “soundbite” predictions concerning the Methuselarity is that the first thousand-year-old is probably less than 20 years younger than the first 150-year-old. The above considerations lead to a supplementary prediction. I think it is abundantly likely that the first million-year-old is less than a year younger than the first thousand-year-old, and the first billion-year-old probably is too.

The singularity and the Methuselarity are superficially similar, but I hope to have communicated in this essay that they are in fact very different concepts. Where they are most similar, however, is in the magnitude of their impact on humanity. The singularity will be a uniquely dramatic change in the trajectory of humanity’s future; the Methuselarity will be a uniquely dramatic change in its perception of its future. Together, they will transform humanity… quite a lot.

References

1. Singularity Institute for Artificial Intelligence. What is the Singularity? http://singinst.org/overview/whatisthesingularity (retrieved 25th August 2008).
2. Ulam S. Tribute to John von Neumann. Bulletin of the American Mathematical Society 1958; 64(3 part 2):1-49.
3. Moore GE. Cramming more components onto integrated circuits. Electronics 1965; 38(8): no pagination.
4. Kurzweil R. The Singularity Is Near: When Humans Transcend Biology. New York: Penguin, 2006 (ISBN: 0143037889).
5. Vinge V. The Coming Technological Singularity. In: Vision-21: Interdisciplinary Science & Engineering in the Era of CyberSpace, proceedings of a Symposium held at NASA Lewis Research Center (NASA Conference Publication CP-10129), 1993.
6. Deutsch D. The Fabric of Reality. New York: Penguin, 1998 (ISBN: 014027541X).
7. Weatherall D. Was there life after DNA? Science 2000; 289(5479):554-555.
8. Shaw GB. Maxims for Revolutionists. In: Man and Superman, 1903.
9. de Grey ADNJ. Long live the unreasonable man. Rejuvenation Res 2008; 11(3):541-542.
10. Pliskin JS, Shepard DS, Weinstein MC. Utility functions for life years and health status. Operations Research 1980; 28:206-224.
11. Olshansky SJ, Perry D, Miller RA, Butler RN. Pursuing the longevity dividend: scientific goals for an aging world. Ann N Y Acad Sci 2007; 1114:11-13.
12. de Grey ADNJ, Ames BN, Andersen JK, Bartke A, Campisi J, Heward CB, McCarter RJM, Stock G. Time to talk SENS: critiquing the immutability of human aging. Ann N Y Acad Sci 2002; 959:452-462.
13. de Grey ADNJ. A strategy for postponing aging indefinitely. Stud Health Technol Inform 2005; 118:209-219.
14. de Grey ADNJ. Escape velocity: why the prospect of extreme human life extension matters now. PLoS Biol 2004; 2(6):723-726.
15. Phoenix CR, de Grey ADNJ. A model of aging as accumulated damage matches observed mortality patterns and predicts the life-extending effects of prospective interventions. AGE 2007; 29(4):133-189.
16. de Grey ADNJ, Rae M. Ending Aging: The Rejuvenation Biotechnologies That Could Reverse Human Aging in Our Lifetime. New York, NY: St. Martin's Press, 2007 (ISBN: 0-312-36706-6).
17. Banks J, Emmerson C, Oldfield Z. Not so brief lives: longevity expectations and wellbeing in retirement. In: Seven Ages of Man and Woman (Stewart I and Vaitilingam R, eds.), Swindon: Economic and Social Research Council, 2004, pp. 28-31.
18. Oeppen J, Vaupel JW. Broken limits to life expectancy. Science 2002; 296(5570):1029-1031.