Cyborg Science: Ultrathin Nanowires can Monitor and Influence What Goes on Inside Your Brain

This is a short article from the Humanity+ website called Cyborg Science: Ultrathin Nanowires can Monitor and Influence What Goes on Inside Your Brain. This is some revolutionary brain science, and it doesn't take much imagination to see that this kind of tech will open worlds of possibilities in brain science.

Cyborg Science: Ultrathin Nanowires can Monitor and Influence What Goes on Inside Your Brain


No longer just fantastical fodder for sci-fi buffs, cyborg technology is bringing us tangible progress toward real-life electronic skin, prosthetics and ultra-flexible circuits. Now taking this human-machine concept to an unprecedented level, pioneering scientists are working on the seamless marriage between electronics and brain signaling with the potential to transform our understanding of how the brain works — and how to treat its most devastating diseases.

“By focusing on the nanoelectronic connections between cells, we can do things no one has done before,” says Charles M. Lieber, Ph.D. “We’re really going into a new size regime for not only the device that records or stimulates cellular activity, but also for the whole circuit. We can make it really look and behave like smart, soft biological material, and integrate it with cells and cellular networks at the whole-tissue level. This could get around a lot of serious health problems in neurodegenerative diseases in the future.”

Disorders that involve malfunctioning nerve cells, such as Parkinson's, can lead to difficulty with the most mundane and essential movements that most of us take for granted: walking, talking, eating and swallowing.

Scientists are working furiously to get to the bottom of neurological disorders. But they involve the body’s most complex organ — the brain — which is largely inaccessible to detailed, real-time scrutiny. This inability to see what’s happening in the body’s command center hinders the development of effective treatments for diseases that stem from it.

By using nanoelectronics, it could become possible for scientists to peer for the first time inside cells, see what’s going wrong in real time and ideally set them on a functional path again.


For the past several years, Lieber has been working to dramatically shrink cyborg science to a level that’s thousands of times smaller and more flexible than other bioelectronic research efforts. His team has made ultrathin nanowires that can monitor and influence what goes on inside cells. Using these wires, they have built ultra-flexible, 3-D mesh scaffolding with hundreds of addressable electronic units, and they have grown living tissue on it. They have also developed the tiniest electronic probe ever that can record even the fastest signaling between cells.

Rapid-fire cell signaling controls all of the body’s movements, including breathing and swallowing, which are affected in some neurodegenerative diseases. And it’s at this level where the promise of Lieber’s most recent work enters the picture.

In one of the lab’s latest directions, Lieber’s team is figuring out how to inject their tiny, ultraflexible electronics into the brain and allow them to become fully integrated with the existing biological web of neurons. They’re currently in the early stages of the project and are working with rat models.

“It’s hard to say where this work will take us,” he says. “But in the end, I believe our unique approach will take us on a path to do something really revolutionary.”

Their presentation is taking place at the 248th National Meeting & Exposition of the American Chemical Society (ACS), the world’s largest scientific society. The meeting features nearly 12,000 presentations on a wide range of science topics and is being held here through Thursday.

Lieber acknowledges funding from the U.S. Department of Defense, the National Institutes of Health and the U.S. Air Force.

 

This article can also be found here.

Building a Better You? The Era of Trans-Human Technology by Christopher Phillips

I just gotta say, if you’re going to do a pro-transhumanism piece, maybe you should refrain from using the most creepy-ass graphic you can find.  Am I right?

Anyway, creepy image aside, this is an interesting little article from LiveScience by Christopher Phillips called Building a Better You? The Era of Trans-Human Technology. The article is about augmented reality, prosthetics, and how intimate our technology has become.

Building a Better You? The Era of Trans-Human Technology (Op-Ed)

An image showing the face of a part-human, part-robot man.
Some scientists imagine cybernetic parts to replace cancerous limbs and aging hearts, radically increasing longevity.
Credit: Lobke Peers | Shutterstock

This article can also be found on the LiveScience website at http://www.livescience.com/45872-transhuman-technology.html

Transhumanism, medical technology and slippery slopes from the NCBI

This article (Transhumanism, medical technology and slippery slopes from the NCBI) explores transhumanism in the medical industry. I thought it was a bit negatively biased, but the sources are good, and disagreement doesn't equate to invalidation in my book, so here it is…

Abstract

In this article, transhumanism is considered to be a quasi‐medical ideology that seeks to promote a variety of therapeutic and human‐enhancing aims. Moderate conceptions are distinguished from strong conceptions of transhumanism, and the strong conceptions are found to be more problematic than the moderate ones. A particular critique of Boström's defence of transhumanism is presented. Various forms of slippery slope arguments that may be used for and against transhumanism are discussed, and one particular criticism, moral arbitrariness, which undermines both weak and strong transhumanism, is highlighted.

No less a figure than Francis Fukuyama1 recently labelled transhumanism as “the world’s most dangerous idea”. Such an eye‐catching condemnation almost certainly denotes an issue worthy of serious consideration, especially given the centrality of biomedical technology to its aims. In this article, we consider transhumanism as an ideology that seeks to evangelise its human‐enhancing aims. Given that transhumanism covers a broad range of ideas, we distinguish moderate conceptions from strong ones and find the strong conceptions more problematic than the moderate ones. We also offer a critique of Boström’s2 position published in this journal. We discuss various forms of slippery slope arguments that may be used for and against transhumanism and highlight one particular criticism, moral arbitrariness, which undermines both forms of transhumanism.

What is transhumanism?

At the beginning of the 21st century, we find ourselves in strange times; facts and fantasy find their way together in ethics, medicine and philosophy journals and websites.2,3,4 Key sites of contestation include the very idea of human nature, the place of embodiment within medical ethics and, more specifically, the systematic reflections on the place of medical and other technologies in conceptions of the good life. A reflection of this situation is captured by Dyens5 who writes,

What we are witnessing today is the very convergence of environments, systems, bodies, and ontology toward and into the intelligent matter. We can no longer speak of the human condition or even of the posthuman condition. We must now refer to the intelligent condition.

We wish to evaluate the contents of such dialogue and to discuss, if not the death of human nature, then at least its dislocation and derogation in the thinkers who label themselves transhumanists.

One difficulty for critics of transhumanism is that a wide range of views fall under its label.6 Not only are there the idiosyncrasies of individual academics, but there does not seem to exist an absolutely agreed-on definition of transhumanism. One can find not only substantial differences between key authors2,3,4,7,8 and the disparate disciplinary nuances of their exhortations, but also subtle variations within the offerings of its chief representatives. It is to be expected that any ideology transforms over time, not least in response to internal and external criticism. Yet the transhumanism critic faces a further problem: identifying a robust target that stays still sufficiently long to be located properly in these web‐driven days, without constructing a "straw man" to knock over with the slightest philosophical breeze. For the purpose of locating a sufficiently substantial target, we identify the writings of one of transhumanism's clearest and most intellectually robust proponents, the Oxford philosopher and cofounder of the World Transhumanist Association, Nick Boström,2 who has written recently in these pages of transhumanism's desire to make good the "half‐baked" project3 that is human nature.

Before specifically evaluating Boström's position, it is best first to offer a global definition for transhumanism and then to locate it among the range of views that fall under that heading. One of the most celebrated advocates of transhumanism is Max More, whose website reads, "no more gods, no more faith, no more timid holding back. The future belongs to posthumanity".8 This gives a clearer idea of the kinds of position that transhumanism stands in direct opposition to. Specifically, More8 asserts,

"Transhumanism" is a blanket term given to the school of thought that refuses to accept traditional human limitations such as death, disease and other biological frailties. Transhumans are typically interested in a variety of futurist topics, including space migration, mind uploading and cryonic suspension. Transhumans are also extremely interested in more immediate subjects such as bio‐ and nano‐technology, computers and neurology. Transhumans deplore the standard paradigms that attempt to render our world comfortable at the expense of human fulfilment.8

Strong transhumanism advocates see themselves engaged in a project, the purpose of which is to overcome the limits of human nature. Whether this is the foundational claim, or merely the central claim, is not clear. These limitations—one may describe them simply as features of human nature, as the idea of labelling them as limitations is itself to take up a negative stance towards them—concern appearance, human sensory capacities, intelligence, lifespan and vulnerability to harm. According to the extreme transhumanism programme, technology can be used to vastly enhance a person’s intelligence; to tailor their appearance to what they desire; to lengthen their lifespan, perhaps to immortality; and to reduce vastly their vulnerability to harm. This can be done by exploitation of various kinds of technology, including genetic engineering, cybernetics, computation and nanotechnology. Whether technology will continue to progress sufficiently, and sufficiently predictably, is of course quite another matter.

Advocates of transhumanism argue that recruitment or deployment of these various types of technology can produce people who are intelligent and immortal, but who are not members of the species Homo sapiens. Their species type will be ambiguous, for example, if they are cyborgs (part human, part machine); or, if they are wholly machines, they will lack any common genetic features with human beings. A legion of labels covers this possibility; we find in Dyens's5 recently translated book a variety of cultural bodies, perhaps the most extreme being cyberpunks:

…a profound misalignment between existence and its manifestation. This misalignment produces bodies so transformed, so dissociated, and so asynchronized, that their only outcome is gross mutation. Cyberpunk bodies are horrible, strange and mysterious (think of Alien, Robocop, Terminator, etc.), for they have no real attachment to any biological structure. (p 75)

Perhaps a reasonable claim is encapsulated in the idea that such entities will be posthuman. The extent to which "posthuman" might be synonymous with "transhuman" is not clear. Extreme transhumanists strongly support such developments.

At the other end of transhumanism is a much less radical project, which is simply the project to use technology to enhance human characteristics—for example, beauty, lifespan and resistance to disease. In this less extreme project, there is no necessary aspiration to shed human nature or human genetic constitution, just to augment it with technology where possible and where desired by the person.

Who is for transhumanism?

At present it seems to be a movement based mostly in North America, although there are some adherents from the UK. Among its most intellectually sophisticated proponents is Nick Boström. Perhaps the most outspoken supporters of transhumanism are people who see it simply as an issue of free choice. It may simply be the case that moderate transhumanists are libertarians at the core. In that case, transhumanism merely supplies an overt technological dimension to libertarianism. If certain technological developments are possible, which they as competent choosers desire, then they should not be prevented from acquiring the technologically driven enhancements they desire. One obvious line of criticism here may be in relation to the inequality that necessarily arises with respect to scarce goods and services distributed by market mechanisms.9 We will elaborate this point in the Transhumanism and slippery slopes section.

So, one group of people for the transhumanism project sees it simply as a way of improving their own life by their own standards of what counts as an improvement. For example, they may choose to purchase an intervention that will make them more intelligent or even extend their life by 200 years. (Of course, it is not self‐evident that everyone would regard this as an improvement.) A less vociferous group sees the transhumanism project not so much as bound to the expansion of autonomy (notwithstanding our criticism that it will necessarily be effected only in the sphere of economic consumer choice) as one that has the potential to improve the quality of life for humans in general. For this group, the relationship between transhumanism and the general good is what makes transhumanism worthy of support. For the other group, the worth of transhumanism lies in its connection with their own conception of what is good for them, with the extension of their personal life choices.

What can be said in its favour?

Of the many points for transhumanism, we note three. Firstly, transhumanism seems to facilitate two aims that have commanded much support. The use of technology to improve humans is something we pretty much take for granted. Much good has been achieved with low‐level technology in the promotion of public health. The construction of sewage systems, clean water supplies, etc, is all work to facilitate this aim and is surely good work, work which aims at, and in this case achieves, a good. Moreover, a large portion of the modern biomedical enterprise is another example of a project that aims at generating this good too.

Secondly, proponents of transhumanism say it presents an opportunity to plan the future development of human beings, the species Homo sapiens. Instead of this being left to the evolutionary process and its exploitation of random mutations, transhumanism presents a hitherto unavailable option: tailoring the development of human beings to an ideal blueprint. Precisely whose ideal gets blueprinted is a point that we deal with later.

Thirdly, in the spirit of work in ethics that makes use of a technical idea of personhood, the view that moral status is independent of membership of a particular species (or indeed any biological species), transhumanism presents a way in which moral status can be shown to be bound to intellectual capacity rather than to human embodiment as such or human vulnerability in the capacity of embodiment (Harris, 1985).9a

What can be said against it?

Critics point to consequences of transhumanism, which they find unpalatable. One possible consequence feared by some commentators is that, in effect, transhumanism will lead to the existence of two distinct types of being, the human and the posthuman. The human may be incapable of breeding with the posthuman and will be seen as having a much lower moral standing. Given that, as Buchanan et al9 note, much moral progress, in the West at least, is founded on the category of the human in terms of rights claims, if we no longer have a common humanity, what rights, if any, ought to be enjoyed by transhumans? This can be viewed either as a criticism (we poor humans are no longer at the top of the evolutionary tree) or simply as a critical concern that invites further argumentation. We shall return to this idea in the final section, by way of identifying a deeper problem with the open‐endedness of transhumanism that builds on this recognition.

In the same vein, critics may argue that transhumanism will increase inequalities between the rich and the poor. The rich can afford to make use of transhumanism, but the poor will not be able to. Indeed, we may come to think of such people as deficient, failing to achieve a new heightened level of normal functioning.9 In the opposing direction, critical observers may say that transhumanism is, in reality, an irrelevance, as very few will be able to use the technological developments even if they ever manifest themselves. A further possibility is that transhumanism could lead to the extinction of humans and posthumans, for things are just as likely to turn out for the worse as for the better (as proponents of the precautionary principle argue).

One of the deeper philosophical objections comes from a very traditional source. Like all such utopian visions, transhumanism rests on some conception of the good. Just as humanism is founded on the idea that humans are the measure of all things and that their fulfilment is to be found in the powers of reason extolled and extended in culture and education, so too transhumanism has a vision of the good, albeit one loosely shared. For one group of transhumanists, the good is the expansion of personal choice. Given that autonomy is so widely valued, why not remove the barriers to enhanced autonomy by various technological interventions? Theological critics especially, but not exclusively, object to what they see as the imperialising of autonomy. Elshtain10 lists the three c's: choice, consent and control. These, she asserts, are the dominant motifs of modern American culture. And there is, of course, an army of communitarians (Bellah et al,10a MacIntyre,10b Sandel,10c Taylor10d and Walzer10e) ready to provide support in general moral and political matters to this line of criticism. One extension of this line of thinking is to align the valorisation of autonomy with economic rationality, for we may as well be motivated by economic concerns as by moral ones where the market is concerned. As noted earlier, only a small minority may be able to access this technology (despite Boström's naive disclaimer for democratic transhumanism), so the technology necessary for transhumanist transformations is unlikely to be prioritised in the context of artificially scarce public health resources. One other population attracted to transhumanism will be the elite sports world, fuelled by the media commercialisation complex, where mere mortals will get no more than a glimpse of the transhuman in competitive physical contexts. There may be something of a double‐binding character to this consumerism: the poor, at once removed from the possibility of such augmentation, pay (per view) for the pleasure of their envy.

Even if we argue that the good cannot be equated simply with whatever people choose, it does not follow that we need to reject the requisite medical technology outright. Against the more moderate transhumanists, who see transhumanism as an opportunity to enhance the general quality of life for humans, it is nevertheless true that their position presupposes some conception of the good. Which traits are best engineered into humans: disease resistance or parabolic hearing? And unsurprisingly, transhumanists disagree about precisely what "objective goods" to select for installation into humans or posthumans.

Some radical critics of transhumanism see it as a threat to morality itself.1,11 This is because they see morality as necessarily connected to the kind of vulnerability that accompanies human nature. Think of the idea of human rights and the power this has had in voicing concern about the plight of especially vulnerable human beings. As noted earlier, a transhuman may be thought to be beyond humanity, neither enjoying its rights nor bearing its obligations. Why would a transhuman be moved by appeals to human solidarity? Once the prospect of posthumanism emerges, the whole of morality is threatened, because the existence of human nature itself is under threat.

One further objection voiced by Habermas11 is that interfering with the process of human conception, and by implication human constitution, deprives humans of the “naturalness which so far has been a part of the taken‐for‐granted background of our self‐understanding as a species” and “Getting used to having human life biotechnologically at the disposal of our contingent preferences cannot help but change our normative self‐understanding” (p 72).

On this account, our self‐understanding would include, for example, our essential vulnerability to disease, ageing and death. Suppose the strong transhumanism project is realised: we are no longer thus vulnerable, and immortality is a real prospect. Nevertheless, conceptual caution must be exercised here, for even transhumans will be susceptible in the manner that Hobbes12 noted: even the strongest are vulnerable in their sleep. But the kind of vulnerability transhumanism seeks to overcome is of the internal kind (not Hobbes's external threats). We are reminded of Woody Allen's famous remark that he wanted to become immortal not by doing great deeds but simply by not dying. Such a development would radically change our self‐understanding, which has inescapably normative elements that need to be challenged. Most radically, this change in self‐understanding may take the form of a change in what we view as a good life. Hitherto, a human life would have been assumed to be finite; transhumanists suggest that even now this may change with appropriate technology and the "right" motivation.

Do the changes in self‐understanding presented by transhumanists (and genetic manipulation) necessarily represent a change for the worse? As discussed earlier, it may be that the technology that generates the possibility of transhumanism can be used for the good of humans, for example, to promote immunity to disease or to increase quality of life. Is there really an intrinsic connection between acquiring the capacity to bring about transhumanism and moral decline? Perhaps Habermas's point is that moral decline is simply more likely to occur once radical enhancement technologies are adopted, even as a practice that is not intrinsically evil or morally objectionable. But how can this be known in advance? This raises the spectre of slippery slope arguments.

But before we discuss such slopes, let us note that the kind of approach (whether characterised as closed‐minded or sceptical) Boström seems to dislike is one he calls speculative. He dismisses as speculative the idea that offspring may think themselves lesser beings, commodifications of their parents’ egoistic desires (or some such). None the less, having pointed out the lack of epistemological standing of such speculation, he invites us to his own apparently more congenial position:

We might speculate, instead, that germ‐line enhancements will lead to more love and parental dedication. Some mothers and fathers might find it easier to love a child who, thanks to enhancements, is bright, beautiful, healthy, and happy. The practice of germ‐line enhancement might lead to better treatment of people with disabilities, because a general demystification of the genetic contributions to human traits could make it clearer that people with disabilities are not to blame for their disabilities and a decreased incidence of some disabilities could lead to more assistance being available for the remaining affected people to enable them to live full, unrestricted lives through various technological and social supports. Speculating about possible psychological or cultural effects of germ‐line engineering can therefore cut both ways. Good consequences no less than bad ones are possible. In the absence of sound arguments for the view that the negative consequences would predominate, such speculations provide no reason against moving forward with the technology. Ruminations over hypothetical side effects may serve to make us aware of things that could go wrong so that we can be on the lookout for untoward developments. By being aware of the perils in advance, we will be in a better position to take preventive countermeasures. (Boström, 2003, p 498)

Following Boström’s3 speculation then, what grounds for hope exist? Beyond speculation, what kinds of arguments does Boström offer? Well, most people may think that the burden of proof should fall to the transhumanists. Not so, according to Boström. Assuming the likely enormous benefits, he turns the tables on this intuition—not by argument but by skilful rhetorical speculation. We quote for accuracy of representation (emphasis added):

Only after a fair comparison of the risks with the likely positive consequences can any conclusion based on a cost‐benefit analysis be reached. In the case of germ‐line enhancements, the potential gains are enormous. Only rarely, however, are the potential gains discussed, perhaps because they are too obvious to be of much theoretical interest. By contrast, uncovering subtle and non‐trivial ways in which manipulating our genome could undermine deep values is philosophically a lot more challenging. But if we think about it, we recognize that the promise of genetic enhancements is anything but insignificant. Being free from severe genetic diseases would be good, as would having a mind that can learn more quickly, or having a more robust immune system. Healthier, wittier, happier people may be able to reach new levels culturally. To achieve a significant enhancement of human capacities would be to embark on the transhuman journey of exploration of some of the modes of being that are not accessible to us as we are currently constituted, possibly to discover and to instantiate important new values. On an even more basic level, genetic engineering holds great potential for alleviating unnecessary human suffering. Every day that the introduction of effective human genetic enhancement is delayed is a day of lost individual and cultural potential, and a day of torment for many unfortunate sufferers of diseases that could have been prevented. Seen in this light, proponents of a ban or a moratorium on human genetic modification must take on a heavy burden of proof in order to have the balance of reason tilt in their favor. (Boström,3 pp 498–9)

Now one way in which such a balance of reason may be had is in the idea of a slippery slope argument. We now turn to that.

Transhumanism and slippery slopes

A proper assessment of transhumanism requires consideration of the objection that acceptance of its main claims will place us on a slippery slope. Yet, paradoxically, both proponents and detractors of transhumanism may exploit slippery slope arguments in support of their position. It is necessary therefore to set out the various arguments that fall under this title so that we can better characterise arguments for and against transhumanism. We shall examine three such attempts13,14,15 and argue that the arbitrary slippery slope15 may undermine all versions of transhumanism, although not every enhancement proposed by its advocates.

Schauer13 offers the following essentialist analysis of slippery slope arguments: a "pure" slippery slope is one where a "particular act, seemingly innocuous when taken in isolation, may yet lead to a future host of similar but increasingly pernicious events". Abortion and euthanasia are classic candidates for slippery slope arguments in public discussion and policy making. Against this, however, there is no reason to suppose that the future events (acts or policies) down the slope need to display similarities; indeed, we may propose that they will lead to a whole range of different, although equally unwished for, consequences. The vast array of enhancements proposed by transhumanists would not be captured under this conception of a slippery slope because of their heterogeneity. Moreover, as Sternglantz16 notes, Schauer undermines his own case when arguing both that greater linguistic precision undermines the slippery slope and that indirect consequences often bolster slippery slope arguments. It is as if slippery slopes would cease to exist in a world of greater linguistic precision, or if only direct consequences were considered. These views do not find support in the later literature. Schauer does, however, identify three non‐slippery slope arguments where the advocate's aim is (a) to show that the bottom of a proposed slope has been arrived at; (b) to show that a principle is excessively broad; or (c) to highlight how granting authority to X will make it more likely that an undesirable outcome will be achieved. Clearly, (a) cannot properly be called a slippery slope argument in itself, while (b) and (c) often have some role in slippery slope arguments.

The excessive breadth principle can be subsumed under Bernard Williams’s distinction between slippery slope arguments with (a) horrible results and (b) arbitrary results. According to Williams, the nature of the bottom of the slope allows us to determine which category a particular argument falls under. Clearly, the most common form is the slippery slope to a horrible result argument. Walton14 goes further in distinguishing three types: (a) thin end of the wedge or precedent arguments; (b) Sorites arguments; and (c) domino‐effect arguments. Importantly, these arguments may be used both by antagonists and also by advocates of transhumanism. We shall consider the advocates of transhumanism first.

In thin end of the wedge slippery slopes, allowing P will set a precedent that allows further precedents (Pn), taken to an unspecified problematic terminus. Is it necessary that the end point be bad? That is, of course, the typical linguistic meaning of the phrase "slippery slope". Nevertheless, we may turn the tables here and argue that slopes may be viewed positively too.17 Perhaps a new phrase will be required to capture ineluctable slides (ascents?) to such end points, somewhat analogous to the ideas of vicious and virtuous cycles. So transhumanists could argue that, once the artificial generation of life through technologies of in vitro fertilisation was thought permissible, the slope was foreseeable, and transhumanists are doing no more than extending that life‐creating and fashioning impulse.

In Sorites arguments, the inability to draw clear distinctions has the effect that allowing P will not allow us to consistently deny Pn. This slope follows the form of the Sorites paradox, where taking a grain of sand from a heap does not prevent our recognising or describing the heap as such, even though it is not identical with its former state. At the heart of the problem with such arguments is the idea of conceptual vagueness. Yet the logical distinctions used by philosophers are often inapplicable in the real world.15,18 Transhumanists may well seize on this vagueness and apply a Sorites argument as follows: as therapeutic interventions are currently morally permissible, and there is no clear distinction between treatment and enhancement, enhancement interventions are morally permissible too. They may ask whether we can really distinguish categorically between the added functionality of certain prosthetic devices and sonar senses.

In domino‐effect arguments, the domino conception of the slippery slope, we have what others often refer to as a causal slippery slope.19 Once P is allowed, a causal chain will be effected allowing Pn and so on to follow, which will precipitate increasingly bad consequences.

In what ways can slippery slope arguments be used against transhumanism? What is wrong with transhumanism? Or, better, is there a point at which we can say transhumanism is objectionable? One particular strategy adopted by proponents of transhumanism falls clearly under the thin end of the wedge conception of the slippery slope. Although some aspects of their ideology seem aimed at unqualified goods, there seems to be no limit to the aspirations of transhumanism, as its proponents cite the powers of other animals and substances as potential modifications for the transhumanist. Although we can admire the sonic capacities of the bat, the elastic strength of lizards’ tongues and the durability of Kevlar in contrast with the traditional materials of the body, their transplantation into humans is, to use Kass’s celebrated label, “repugnant”.19a

Although not all transhumanists would support such extreme enhancements (if that is indeed what they are), less radical advocates put therapeutic justifications up front, with the more Promethean aims less explicitly advertised. We can find many examples of this manoeuvre. Take, for example, the Cognitive Enhancement Research Institute in California. Prominently displayed on its website front page (http://www.ceri.com/) we read, “Do you know somebody with Alzheimer’s disease? Click to see the latest research breakthrough.” The move is simple: treatment by the front entrance, enhancement by the back door. Borgmann,20 in his discussion of the uses of technology in modern society, observed precisely this argumentative strategy more than 20 years ago:

The main goal of these programs seems to be the domination of nature. But we must be more precise. The desire to dominate does not just spring from a lust of power, from sheer human imperialism. It is from the start connected with the aim of liberating humanity from disease, hunger, and toil and enriching life with learning, art and athletics.

Who would want to deny the powers of viral diseases that can be genetically treated? Would we want to draw the line at the transplantation of non‐human capacities (sonar path finding)? Or at in vivo fibre optic communications backbone or anti‐degeneration powers? (These would have to be non‐human by hypothesis). Or should we consider the scope of technological enhancements that one chief transhumanist, Natasha Vita More21, propounds:

A transhuman is an evolutionary stage from being exclusively biological to becoming post‐biological. Post‐biological means a continuous shedding of our biology and merging with machines. (…) The body, as we transform ourselves over time, will take on different types of appearances and designs and materials. (…)

For hiking a mountain, I’d like extended leg strength, stamina, a skin‐sheath to protect me from damaging environmental aspects, self‐moisturizing, cool‐down capability, extended hearing and augmented vision (Network of sonar sensors depicts data through solid mass and map images onto visual field. Overlay window shifts spectrum frequencies. Visual scratch pad relays mental ideas to visual recognition bots. Global Satellite interface at micro‐zoom range).

For a party, I’d like an eclectic look ‐ a glistening bronze skin with emerald green highlights, enhanced height to tower above other people, a sophisticated internal sound system so that I could alter the music to suit my own taste, memory enhance device, emotional‐select for feel‐good people so I wouldn’t get dragged into anyone’s inappropriate conversations. And parabolic hearing so that I could listen in on conversations across the room if the one I was currently in started winding down.

Notwithstanding the difficulty of bringing transhumanism together under one movement, the sheer variety of proposals contained within Vita More’s catalogue alone means that we cannot determinately point to a precise station at which we can say, “Here, this is the end we said things would naturally progress to.” But does this pose a problem? Well, it certainly makes it difficult to specify exactly a “horrible result” that is supposed to lie at the bottom of the slope. Equally, it is extremely difficult to say that if we allow precedent X, practices Y or Z will follow, as it is not clear how these practices are (if at all) connected with the precedent. So it is not clear that a precedent‐setting slippery slope can be strictly used in every case against transhumanism, although it may be applicable in some.

Nevertheless, we contend, in contrast with Boström, that the burden of proof falls to the transhumanist. Consider, in this light, a Sorites‐type slope. The transhumanist would have to show that the relationship between the therapeutic practices and the enhancements is indeed transitive. We know night from day without being able to specify exactly when one becomes the other. So simply because we cannot determine a precise distinction between, say, genetic treatments G1, G2 and G3, and transhumanist enhancements T1, T2 and so on, it does not follow that there are no important moral distinctions between G1 and T20. According to Williams,15 this kind of indeterminacy arises because of the conceptual vagueness of certain terms. Yet the indeterminacy of so open a predicate as “heap” is not equally true of “therapy” or “enhancement”. The latitude they permit is nowhere near so wide.

Instead of objecting to Pn on the grounds that Pn is morally objectionable (ie, to depict a horrible result), we may instead, after Williams, object that the slide from P to Pn is simply morally arbitrary, when it ought not to be. Here we may say, without specifying a horrible result, that it would be difficult to know what, in principle, could ever be objected to. And this is, quite literally, what is troublesome. It seems to us that this criticism applies to all categories of transhumanism, although not necessarily to all enhancements proposed by them. Clearly, the somewhat loose identity of the movement, and the variations between strong and moderate versions, makes it difficult to sustain this argument unequivocally. Still, the transhumanist may be justified in asking, “What is wrong with arbitrariness?” Let us consider one brief example. In many aspects of our lives we share the intuition that, in the absence of good reasons, we ought not to discriminate among people arbitrarily. Healthcare may be considered precisely such a case. Given the ever‐increasing demand for public healthcare services and products, it may be argued that access to them ought typically to be governed by publicly disputable criteria such as clinical need or potential benefit, as opposed to individual choices of an arbitrary or subjective nature. And nothing in transhumanism seems to allow for such objective dispute, let alone prioritisation.

Of course, transhumanists such as More find no such disquietude; his phrase “No more timidity” is a typical transhumanist slogan. We applaud advances in therapeutic medical technologies, from new genetically based organ regeneration to more familiar prosthetic devices. Here the ends of the interventions are clearly medically defined and the means closely regulated. This is what prevents transhumanists from legitimately deploying a Sorites‐type slippery slope argument.
But in the absence of a telos, of clearly and substantively specified ends (beyond the mere banner of enhancement), we suggest that the public, medical professionals and bioethicists alike ought to resist the potentially open‐ended transformations of human nature. For if all transformations are in principle enhancements, then surely none are; the very application of the word may become redundant. Thus one strong argument, the arbitrary slippery slope, presents a challenge to transhumanism: to show that all of what are described as transhumanist enhancements are imbued with positive normative force and are not merely technological extensions of libertarianism, whose conception of the good is simply an extension of individual choice and consumption.

Limits of transhumanist arguments for medical technology and practice

Already, we have seen the misuse of a host of therapeutically designed drugs used by non‐therapeutic populations for enhancements. Consider the non‐therapeutic use of human growth hormone in non‐clinical populations. Such is the present perception of height as a positional good in society that Cuttler et al22 report that the proportion of doctors who recommended human growth hormone treatment of short non‐growth hormone deficient children ranged from 1% to 74%. This is despite its contrary indication in professional literature, such as that of the Pediatric Endocrine Society, and considerable doubt about its efficacy.23,24 Moreover, evidence supports the view that recreational body builders will use the technology, given the evidence of their use or misuse of steroids and other biotechnological products.25,26 Finally, in the sphere of elite sport, which so valorises embodied capacities that may be found elsewhere in greater degree, precision and sophistication in the animal kingdom or in the computer laboratory, biomedical enhancers may latch onto the genetically determined capacities and adopt or adapt them for their own commercially driven ends.

The arguments and examples presented here do no more than warn us about enhancement ideologies, such as transhumanism, which seek to predicate their futuristic agendas on the bedrock of medical technological progress aimed at therapeutic ends, secondarily extended to loosely defined enhancement ends. In discussion and in the bioethical literature, the future of genetic engineering is often challenged by slippery slope arguments that lead policy and practice to a horrible result. Instead of pointing to the undesirability of the ends to which transhumanism leads, we have pointed out its failure to specify a telos beyond the slogans of “overcoming timidity” or Boström’s3 exhortation that the passive acceptance of ageing erects “reckless and dangerous barriers to urgently needed action in the biomedical sphere”.

We propose that greater care be taken to distinguish the slippery slope arguments that are used in the emotionally loaded exhortations of transhumanism to come to a more judicious perspective on the technologically driven agenda for biomedical enhancement. Perhaps we would do better to consider those other all‐too‐human frailties such as violent aggression, wanton self‐harming and so on, before we turn too readily to the richer imaginations of biomedical technologists.

Footnotes

Competing interests: None.

References

1. Fukuyama F. Transhumanism. Foreign Policy 2004;(124):42–44.
2. Boström N. The fable of the dragon tyrant. J Med Ethics 2005;31:231–237.
3. Boström N. Human genetic enhancements: a transhumanist perspective. J Value Inquiry 2004;37:493–506.
4. Boström N. Transhumanist values. http://www.nickbostrom.com/ethics/values.html (accessed 19 May 2005)
5. Dyens O. The evolution of man: technology takes over. In: Metal and flesh (trans Bibbee EJ). London: MIT Press, 2001.
6. World Transhumanist Association. http://www.transhumanism.org/index.php/WTA/index/ (accessed 7 Apr 2006)
7. More M. Transhumanism: towards a futurist philosophy. 1996. http://www.maxmore.com/transhum.htm (accessed 20 Jul 2005)
8. More M. 2005. http://www.mactonnies.com/trans.html (accessed 13 Jul 2005)
9. Buchanan A, Brock DW, Daniels N, et al. From chance to choice: genetics and justice. Cambridge: Cambridge University Press, 2000.
9a. Harris J. The value of life. London: Routledge, 1985.
10. Elshtain B. The body and the quest for control. In: Is human nature obsolete? Cambridge, MA: MIT Press, 2004:155–174.
10a. Bellah RN, et al. Habits of the heart: individualism and commitment in American life. Berkeley: University of California Press, 1996.
10b. MacIntyre AC. After virtue. 2nd edn. London: Duckworth, 1985.
10c. Sandel M. Liberalism and the limits of justice. Cambridge: Cambridge University Press, 1982.
10d. Taylor C. The ethics of authenticity. Boston: Harvard University Press, 1982.
10e. Walzer M. Spheres of justice. New York: Basic Books, 1983.
11. Habermas J. The future of human nature. Cambridge: Polity, 2003.
12. Hobbes T. Leviathan. Oakeshott M, ed. London: Macmillan, 1962.
13. Schauer F. Slippery slopes. Harvard Law Rev 1985;99:361–383.
14. Walton DN. Slippery slope arguments. Oxford: Clarendon, 1992.
15. Williams BAO. Which slopes are slippery? In: Lockwood M, ed. Making sense of humanity. Cambridge: Cambridge University Press, 1995:213–223.
16. Sternglantz R. Raining on the parade of horribles: of slippery slopes, faux slopes, and Justice Scalia’s dissent in Lawrence v Texas. Univ Pa Law Rev 2005;153:1097–1120.
17. Schubert L. Ethical implications of pharmacogenetics: do slippery slope arguments matter? Bioethics 2004;18:361–378.
18. Lamb D. Down the slippery slope. London: Croom Helm, 1988.
19. Den Hartogh G. The slippery slope argument. In: Kuhse H, Singer P, eds. Companion to bioethics. Oxford: Blackwell, 2005:280–290.
19a. Kass L. The wisdom of repugnance. New Republic 1997 June 2:17–26.
20. Borgmann A. Technology and the character of contemporary life. Chicago: University of Chicago Press, 1984.
21. Vita More N. Who are transhumans? 2000. http://www.transhumanist.biz/interviews.htm (accessed 7 Apr 2006)
22. Cuttler L, Silvers JB, Singh J, et al. Short stature and growth hormone therapy: a national study of physician recommendation patterns. JAMA 1996;276:531–537.
23. Vance ML, Mauras N. Growth hormone therapy in adults and children. N Engl J Med 1999;341:1206–1216.
24. Anon. Guidelines for the use of growth hormone in children with short stature: a report by the Drug and Therapeutics Committee of the Lawson Wilkins Pediatric Endocrine Society. J Pediatr 1995;127:857–867.
25. Grace F, Baker JS, Davies B. Anabolic androgenic steroid (AAS) use in recreational gym users. J Subst Use 2001;6:189–195.
26. Grace F, Baker JS, Davies B. Blood pressure and rate pressure product response in males using high‐dose anabolic androgenic steroids (AAS). J Sci Med Sport 2003;6:307–312.

Articles from Journal of Medical Ethics are provided here courtesy of BMJ Group

This article can also be found on the National Center for Biotechnology Information (NCBI) website at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2563415/

Humans 2.0 with Jason Silva

This is one of the Shots of Awe videos created by Jason Silva.  It’s called HUMAN 2.0.  I don’t think a description is in order here since all the Shots of Awe videos are short and sweet.

Runtime: 2:15

Video Info:

Published on Dec 2, 2014

“Your trivial-seeming self tracking app is part of something much bigger. It’s part of a new stage in the scientific method, another tool in the kit. It’s not the only thing going on, but it’s part of that evolutionary process.” – Ethan Zuckerman paraphrasing Kevin Kelly

Steven Johnson
“Chance favors the connected mind.”
http://www.ted.com/talks/steven_johns…

Additional footage courtesy of Monstro Design and http://nats.aero

For more information on Norton security, please go here: http://us.norton.com/boldlygo

Join Jason Silva every week as he freestyles his way into the complex systems of society, technology and human existence and discusses the truth and beauty of science in a form of existential jazz. New episodes every Tuesday.

Watch More Shots of Awe on TestTube http://testtube.com/shotsofawe

Subscribe now! http://www.youtube.com/subscription_c…

Jason Silva on Twitter http://twitter.com/jasonsilva

Jason Silva on Facebook http://facebook.com/jasonlsilva

Jason Silva on Google+ http://plus.google.com/10290664595165…

This video can also be found at https://www.youtube.com/watch?v=fXB5-iwNah0

Ray Kurzweil’s Mind-Boggling Predictions for the Next 25 Years from SingularityHUB

This is an article from SingularityHub called, “Ray Kurzweil’s Mind-Boggling Predictions for the Next 25 Years.”  For those of you already familiar with Ray Kurzweil, you’ve probably heard all this before, but this is a great introduction to his work if you are not already familiar with it.

Ray Kurzweil’s Mind-Boggling Predictions for the Next 25 Years


In my new book BOLD, one of the interviews that I’m most excited about is with my good friend Ray Kurzweil.

Bill Gates calls Ray, “the best person I know at predicting the future of artificial intelligence.” Ray is also amazing at predicting a lot more beyond just AI.

This post looks at his incredible predictions for the next 20+ years.

Ray Kurzweil.

So who is Ray Kurzweil?

He has received 20 honorary doctorates, has been awarded honors from three U.S. presidents, and has authored 7 books (5 of which have been national bestsellers).

He is the principal inventor of many technologies ranging from the first CCD flatbed scanner to the first print-to-speech reading machine for the blind. He is also the chancellor and co-founder of Singularity University, and the guy tagged by Larry Page to direct artificial intelligence development at Google.

In short, Ray’s pretty smart… and his predictions are amazing, mind-boggling, and important reminders that we are living in the most exciting time in human history.

But, first let’s look back at some of the predictions Ray got right.

Predictions Ray has gotten right over the last 25 years

In 1990 (twenty-five years ago), he predicted…

…that a computer would defeat a world chess champion by 1998. Then in 1997, IBM’s Deep Blue defeated Garry Kasparov.

… that PCs would be capable of answering queries by accessing information wirelessly via the Internet by 2010. He was right, to say the least.

… that by the early 2000s, exoskeletal limbs would let the disabled walk. Companies like Ekso Bionics and others now have technology that does just this, and much more.

In 1999, he predicted…

… that people would be able to talk to their computer to give commands by 2009. While still in the early days in 2009, natural language interfaces like Apple’s Siri and Google Now have come a long way. I rarely use my keyboard anymore; instead I dictate texts and emails.

… that computer displays would be built into eyeglasses for augmented reality by 2009. Labs and teams were building head mounted displays well before 2009, but Google started experimenting with Google Glass prototypes in 2011. Now, we are seeing an explosion of augmented and virtual reality solutions and HMDs. Microsoft just released the Hololens, and Magic Leap is working on some amazing technology, to name two.

In 2005, he predicted…

… that by the 2010s, virtual solutions would be able to do real-time language translation in which words spoken in a foreign language would be translated into text that would appear as subtitles to a user wearing the glasses. Well, Microsoft (via Skype Translate), Google (Translate), and others have done this and beyond. One app called Word Lens actually uses your camera to find and translate text imagery in real time.

Ray’s predictions for the next 25 years

The above represent only a few of the predictions Ray has made.

While he hasn’t been precisely right, to the exact year, his track record is stunningly good.

Here are some of my favorite of Ray’s predictions for the next 25+ years.

If you are an entrepreneur, you need to be thinking about these. Specifically, how are you going to capitalize on them when they happen? How will they affect your business?

By the late 2010s, glasses will beam images directly onto the retina. Ten terabytes of computing power (roughly the same as the human brain) will cost about $1,000.

By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.

By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.

By the 2040s, non‐biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in the physical world at a whim.

By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.

I want to make an important point.

It’s not about the predictions.

It’s about what the predictions represent.

Ray’s predictions are a byproduct of his (and my) understanding of the power of Moore’s Law, more specifically Ray’s “Law of Accelerating Returns” and of exponential technologies.

These technologies follow an exponential growth curve based on the principle that the computing power that enables them doubles every two years.
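To make the doubling principle concrete, here is a small sketch (the starting value, the one-unit-per-year linear increment, and the clean two-year doubling period are illustrative assumptions, not measured data) comparing linear intuition with exponential reality over a 25-year horizon:

```python
# Compare linear growth with Moore's-law-style doubling.
# All numbers are illustrative assumptions: capability starts at 1x,
# linear growth adds 1 unit per year, exponential growth doubles
# every 2 years.

def linear_growth(start: float, increment: float, years: float) -> float:
    """Capability after `years` if it grows by a fixed increment per year."""
    return start + increment * years

def exponential_growth(start: float, doubling_period_years: float, years: float) -> float:
    """Capability after `years` if it doubles every `doubling_period_years`."""
    return start * 2 ** (years / doubling_period_years)

for years in (5, 10, 25):
    lin = linear_growth(1.0, 1.0, years)       # linear intuition
    exp = exponential_growth(1.0, 2.0, years)  # doubling every 2 years
    print(f"{years:>2} years: linear {lin:6.1f}x vs exponential {exp:8.1f}x")
```

After 10 years the linear estimate is 11x while the doubling curve is already 32x, and by 25 years the gap is three orders of magnitude, which is the "deceptive then disruptive" pattern the 6D's describe.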

[Chart: the exponential growth of computing]

As humans, we are biased to think linearly.

As entrepreneurs, we need to think exponentially.

I often talk about the 6D’s of exponential thinking:

Most of us can’t see the things Ray sees because the initial growth stages of exponential, DIGITIZED technologies are DECEPTIVE.

Before we know it, they are DISRUPTIVE—just look at the massive companies that have been disrupted by technological advances in AI, virtual reality, robotics, internet technology, mobile phones, OCR, translation software, and voice control technology.

Each of these technologies DEMATERIALIZED, DEMONETIZED, and DEMOCRATIZED access to services and products that used to be linear and non-scalable.

Now, these technologies power multibillion-dollar companies and affect billions of lives.

Image Credit: Shutterstock.com; Singularity University; Ray Kurzweil and Kurzweil Technologies, Inc./Wikimedia Commons

This article can also be found at http://singularityhub.com/2015/01/26/ray-kurzweils-mind-boggling-predictions-for-the-next-25-years/

The Guardian Interview with Ray Kurzweil

This is an excellent article from the Guardian entitled, “Are the robots about to rise? Google’s new director of engineering thinks so…”  If you’re just getting familiar with the concepts of the singularity and machine learning and transhumanism… then this is an excellent article to read.  I think the Guardian did a great job of presenting Ray Kurzweil’s ideas open-mindedly and without bias while, at the same time, keeping a critical eye to the facts.  The following is a quote from this article which I found compelling, “…the Google knowledge graph, which consists of 800m (million) concepts and the billions of relationships between them. This is already a neural network, a massive, distributed global “brain”. Can it learn? Can it think? It’s what some of the smartest people on the planet are working on…”  Wow!

Are the robots about to rise? Google’s new director of engineering thinks so…

Ray Kurzweil popularised the Terminator-like moment he called the ‘singularity’, when artificial intelligence overtakes human thinking. But now the man who hopes to be immortal is involved in the very same quest – on behalf of the tech behemoth.

Robot from The Terminator
The Terminator films envisage a future in which robots have become sentient and are at war with humankind. Ray Kurzweil thinks that machines could become ‘conscious’ by 2029, but believes they will augment us. Photograph: Solent News/Rex

It’s hard to know where to start with Ray Kurzweil. With the fact that he believes that he has a good chance of living for ever? He just has to stay alive “long enough” to be around for when the great life-extending technologies kick in (he’s 66 and he believes that “some of the baby-boomers will make it through”). Or with the fact that he’s predicted that in 15 years’ time, computers are going to trump people. That they will be smarter than we are. Not just better at doing sums than us and knowing what the best route is to Basildon. They already do that. But that they will be able to understand what we say, learn from experience, crack jokes, tell stories, flirt. Ray Kurzweil believes that, by 2029, computers will be able to do all the things that humans do. Only better.

But then everyone’s allowed their theories. It’s just that Kurzweil’s theories have a habit of coming true. And, while he’s been a successful technologist and entrepreneur and invented devices that have changed our world – the first flatbed scanner, the first computer program that could recognise a typeface, the first text-to-speech synthesizer and dozens more – and has been an important and influential advocate of artificial intelligence and what it will mean, he has also always been a lone voice in, if not quite a wilderness, then in something other than the mainstream.

And now? Now, he works at Google. Ray Kurzweil, who believes that we can live for ever and that computers will gain what looks a lot like consciousness in a little over a decade, is now Google’s director of engineering. The announcement of this, last year, was extraordinary enough. To people who work with tech or who are interested in tech and who are familiar with the idea that Kurzweil has popularised of “the singularity” – the moment in the future when men and machines will supposedly converge – and know him as either a brilliant maverick and visionary futurist, or a narcissistic crackpot obsessed with longevity, this was headline news in itself.

But it’s what came next that puts this into context. It’s since been revealed that Google has gone on an unprecedented shopping spree and is in the throes of assembling what looks like the greatest artificial intelligence laboratory on Earth; a laboratory designed to feast upon a resource of a kind that the world has never seen before: truly massive data. Our data. From the minutiae of our lives.

Google has bought almost every machine-learning and robotics company it can find, or at least, rates. It made headlines two months ago, when it bought Boston Dynamics, the firm that produces spectacular, terrifyingly life-like military robots, for an “undisclosed” but undoubtedly massive sum. It spent $3.2bn (£1.9bn) on smart thermostat maker Nest Labs. And this month, it bought the secretive and cutting-edge British artificial intelligence startup DeepMind for £242m.

And those are just the big deals. It also bought Bot & Dolly, Meka Robotics, Holomni, Redwood Robotics and Schaft, and another AI startup, DNNresearch. It hired Geoff Hinton, a British computer scientist who’s probably the world’s leading expert on neural networks. And it has embarked upon what one DeepMind investor told the technology publication Re/code two weeks ago was “a Manhattan project of AI”. If artificial intelligence was really possible, and if anybody could do it, he said, “this will be the team”. The future, in ways we can’t even begin to imagine, will be Google’s.

There are no “ifs” in Ray Kurzweil’s vocabulary, however, when I meet him in his new home – a high-rise luxury apartment block in downtown San Francisco that’s become an emblem for the city in this, its latest incarnation, the Age of Google. Kurzweil does not do ifs, or doubt, and he most especially doesn’t do self-doubt. Though he’s bemused about the fact that “for the first time in my life I have a job” and has moved from the east coast where his wife, Sonya, still lives, to take it.

Ray Kurzweil photographed in San Francisco last year.
Ray Kurzweil photographed in San Francisco last year. Photograph: Zackary Canepari/Panos Pictures

Bill Gates calls him “the best person I know at predicting the future of artificial intelligence”. He’s received 19 honorary doctorates, and he’s been widely recognised as a genius. But he’s the sort of genius, it turns out, who’s not very good at boiling a kettle. He offers me a cup of coffee and when I accept he heads into the kitchen to make it, filling a kettle with water, putting a teaspoon of instant coffee into a cup, and then moments later, pouring the unboiled water on top of it. He stirs the undissolving lumps and I wonder whether to say anything but instead let him add almond milk – not eating dairy is just one of his multiple dietary rules – and politely say thank you as he hands it to me. It is, by quite some way, the worst cup of coffee I have ever tasted.

But then, he has other things on his mind. The future, for starters. And what it will look like. He’s been making predictions about the future for years, ever since he realised that one of the key things about inventing successful new products was inventing them at the right moment, and “so, as an engineer, I collected a lot of data”. In 1990, he predicted that a computer would defeat a world chess champion by 1998. In 1997, IBM’s Deep Blue defeated Garry Kasparov. He predicted the explosion of the world wide web at a time it was only being used by a few academics and he predicted dozens and dozens of other things that have largely come true, or that will soon, such as that by the year 2000, robotic leg prostheses would allow paraplegics to walk (the US military is currently trialling an “Iron Man” suit) and “cybernetic chauffeurs” would be able to drive cars (which Google has more or less cracked).

His critics point out that not all his predictions have exactly panned out (no US company has reached a market capitalisation of more than $1 trillion; “bioengineered treatments” have yet to cure cancer). But in any case, the predictions aren’t the meat of his work, just a byproduct. They’re based on his belief that technology progresses exponentially (as is also the case in Moore’s law, which sees computers’ performance doubling every two years). But then you just have to dig out an old mobile phone to understand that. The problem, he says, is that humans don’t think about the future that way. “Our intuition is linear.”

When Kurzweil first started talking about the “singularity”, a conceit he borrowed from the science-fiction writer Vernor Vinge, he was dismissed as a fantasist. He has been saying for years that he believes that the Turing test – the moment at which a computer will exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human – will be passed in 2029. The difference is that when he began saying it, the fax machine hadn’t been invented. But now, well… it’s another story.

“My book The Age of Spiritual Machines came out in 1999 and we had a conference of AI experts at Stanford and we took a poll by hand about when you think the Turing test would be passed. The consensus was hundreds of years. And a pretty good contingent thought that it would never be done.

“And today, I’m pretty much at the median of what AI experts think and the public is kind of with them. Because the public has seen things like Siri [the iPhone’s voice-recognition technology] where you talk to a computer, they’ve seen the Google self-driving cars. My views are not radical any more. I’ve actually stayed consistent. It’s the rest of the world that’s changing its view.”

And yet, we still haven’t quite managed to get to grips with what that means. The Spike Jonze film, Her, which is set in the near future and has Joaquin Phoenix falling in love with a computer operating system, is not so much fantasy, according to Kurzweil, as a slightly underambitious rendering of the brave new world we are about to enter. “A lot of the dramatic tension is provided by the fact that Theodore’s love interest does not have a body,” Kurzweil writes in a recent review of it. “But this is an unrealistic notion. It would be technically trivial in the future to provide her with a virtual visual presence to match her virtual auditory presence.”

But then he predicts that by 2045 computers will be a billion times more powerful than all of the human brains on Earth. And the characters’ creation of an avatar of a dead person based on their writings, in Jonze’s film, is an idea that he’s been banging on about for years. He’s gathered all of his father’s writings and ephemera in an archive and believes it will be possible to retro-engineer him at some point in the future.

So far, so sci-fi. Except that Kurzweil’s new home isn’t some futuristic MegaCorp intent on world domination. It’s not Skynet. Or, maybe it is, but we largely still think of it as that helpful search engine with the cool design. Kurzweil has worked with Google’s co-founder Larry Page on special projects over several years. “And I’d been having ongoing conversations with him about artificial intelligence and what Google is doing and what I was trying to do. And basically he said, ‘Do it here. We’ll give you the independence you’ve had with your own company, but you’ll have these Google-scale resources.'”

And it’s the Google-scale resources that are beyond anything the world has seen before. Such as the huge data sets that result from 1 billion people using Google every single day. And the Google knowledge graph, which consists of 800m concepts and the billions of relationships between them. This is already a neural network, a massive, distributed global “brain”. Can it learn? Can it think? It’s what some of the smartest people on the planet are working on next.

Peter Norvig, Google’s research director, said recently that the company employs “less than 50% but certainly more than 5%” of the world’s leading experts on machine learning. And that was before it bought DeepMind which, it should be noted, agreed to the deal with the proviso that Google set up an ethics board to look at the question of what machine learning will actually mean when it’s in the hands of what has become the most powerful company on the planet. Of what machine learning might look like when the machines have learned to make their own decisions. Or gained what we humans call “consciousness”.

Garry Kasparov ponders a move against IBM’s Deep Blue. Ray Kurzweil predicted the computer’s triumph. Photograph: Stan Honda/AFP/Getty Images


I first saw Boston Dynamics’ robots in action at a presentation at Singularity University, the institution Ray Kurzweil co-founded, which Google helped fund and which is devoted to exploring exponential technologies. And it was Singularity University’s own robotics faculty member Dan Barry who sounded a note of alarm about what the technology might mean: “I don’t see any end point here,” he said when talking about the use of military robots. “At some point humans aren’t going to be fast enough. So what you do is you make them autonomous. And where does that end? Terminator?”

And the woman who headed the Defence Advanced Research Projects Agency (Darpa), the secretive US military agency that funded the development of BigDog? Regina Dugan. Guess where she works now?

Kurzweil’s job description consists of a one-line brief. “I don’t have a 20-page packet of instructions,” he says. “I have a one-sentence spec. Which is to help bring natural language understanding to Google. And how they do that is up to me.”

Language, he believes, is the key to everything. “And my project is ultimately to base search on really understanding what the language means. When you write an article you’re not creating an interesting collection of words. You have something to say and Google is devoted to intelligently organising and processing the world’s information. The message in your article is information, and the computers are not picking up on that. So we would like to actually have the computers read. We want them to read everything on the web and every page of every book, then be able to engage in an intelligent dialogue with the user to be able to answer their questions.”

Google will know the answer to your question before you have asked it, he says. It will have read every email you’ve ever written, every document, every idle thought you’ve ever tapped into a search-engine box. It will know you better than your intimate partner does. Better, perhaps, than even yourself.

The most successful example of natural-language processing so far is IBM’s computer Watson, which in 2011 went on the US quiz show Jeopardy and won. “And Jeopardy is a pretty broad task. It involves similes and jokes and riddles.” For example, it was given “a long tiresome speech delivered by a frothy pie topping” in the rhyme category and quickly responded: “A meringue harangue.” Which is pretty clever: the humans didn’t get it. And what’s not generally appreciated is that Watson’s knowledge was not hand-coded by engineers. Watson got it by reading. Wikipedia – all of it.

Kurzweil says: “Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM’s Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I’m doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn’t understand the implications of what it’s reading. It’s doing a sort of pattern matching. It doesn’t understand that if John sold his red Volvo to Mary that involves a transaction, or possession and ownership being transferred. It doesn’t understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.”
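The distinction Kurzweil draws, between matching surface patterns and encoding what a sentence entails, can be sketched with a toy structured representation. The frame format and field names below are illustrative assumptions, not a real Google or Watson data structure:

```python
# Toy sketch: a pattern matcher sees only the words of "John sold his red
# Volvo to Mary"; a semantic encoding captures the entailments Kurzweil
# mentions (a transaction occurred, ownership was transferred). All field
# names here are hypothetical, chosen only for illustration.

def encode_sale(seller, buyer, item):
    """Represent 'seller sold item to buyer' as a structured event frame."""
    return {
        "event": "commercial_transaction",
        "seller": seller,
        "buyer": buyer,
        "item": item,
        # Entailments that pure word-level pattern matching would miss:
        "owner_before": seller,
        "owner_after": buyer,
    }

frame = encode_sale("John", "Mary", "red Volvo")
print(frame["owner_after"])  # the encoding "knows" Mary now owns the car
```

The point of such a frame is that downstream questions ("Who owns the Volvo now?") become lookups over entailed facts rather than string matches against the original sentence.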

And once the computers can read their own instructions, well… gaining domination over the rest of the universe will surely be easy pickings. Though Kurzweil, being a techno-optimist, doesn’t worry about the prospect of being enslaved by a master race of newly liberated iPhones with ideas above their station. He believes technology will augment us. Make us better, smarter, fitter. That just as we’ve already outsourced our ability to remember telephone numbers to their electronic embrace, so we will welcome nanotechnologies that thin our blood and boost our brain cells. His mind-reading search engine will be a “cybernetic friend”. He is unimpressed by Google Glass because he doesn’t want any technological filter between us and reality. He just wants reality to be that much better.

“I thought about if I had all the money in the world, what would I want to do?” he says. “And I would want to do this. This project. This is not a new interest for me. This idea goes back 50 years. I’ve been thinking about artificial intelligence and how the brain works for 50 years.”

The evidence of those 50 years is dotted all around the apartment. He shows me a cartoon he came up with in the 60s which shows a brain in a vat. And there’s a still from a TV quiz show that he entered aged 17 with his first invention: he’d programmed a computer to compose original music. On his walls are paintings that were produced by a computer programmed to create its own original artworks. And scrapbooks that detail the histories of various relatives: the aunts and uncles who escaped from Nazi Germany on the Kindertransport, his great-grandmother who set up what he says was Europe’s first school to provide higher education for girls.

Kurzweil suggests that language is the key to teaching machines to think. He says his job is to ‘base search on really understanding what the language means’. The most successful example of natural-language processing to date is IBM’s computer Watson, which in 2011 went on the US quiz show Jeopardy and won (shown above). Photograph: AP


His home is nothing if not eclectic. It’s a shiny apartment in a shiny apartment block with big glass windows and modern furnishings but it’s imbued with the sort of meaning and memories and resonances that, as yet, no machine can understand. His relatives escaped the Holocaust “because they used their minds. That’s actually the philosophy of my family. The power of human ideas. I remember my grandfather coming back from his first return visit to Europe. I was seven and he told me he’d been given the opportunity to handle – with his own hands – original documents by Leonardo da Vinci. He talked about it in very reverential terms, like these were sacred documents. But they weren’t handed down to us by God. They were created by a guy, a person. A single human had been very influential and had changed the world. The message was that human ideas changed the world. And that is the only thing that could change the world.”

On his fingers are two rings, one from the Massachusetts Institute of Technology, where he studied, and another that was created by a 3D printer, and on his wrist is a 30-year-old Mickey Mouse watch. “It’s very important to hold on to our whimsy,” he says when I ask him about it. Why? “I think it’s the highest level of our neocortex. Whimsy, humour…”

Even more engagingly, tapping away on a computer in the study next door I find Amy, his daughter. She’s a writer and a teacher and warm and open, and while Kurzweil goes off to have his photo taken, she tells me that her childhood was like “growing up in the future”.

Is that what it felt like? “I do feel a little bit like the ideas I grew up hearing about are now ubiquitous… Everything is changing so quickly and it’s not something that people realise. When we were kids people used to talk about what they were going to do when they were older, and they didn’t necessarily consider how many changes would happen, and how the world would be different, but that was at the back of my head.”

And what about her father’s idea of living for ever? What did she make of that? “What I think is interesting is that all kids think they are going to live for ever so actually it wasn’t that much of a disconnect for me. I think it made perfect sense. Now it makes less sense.”

Well, yes. But there’s not a scintilla of doubt in Kurzweil’s mind about this. My arguments slide off what looks like his carefully moisturised skin. “My health regime is a wake-up call to my baby-boomer peers,” he says. “Most of whom are accepting the normal cycle of life and accepting they are getting to the end of their productive years. That’s not my view. Now that health and medicine is an information technology it is going to expand exponentially. We will see very dramatic changes ahead. According to my model it’s only 10-15 years away from where we’ll be adding more than a year every year to life expectancy because of progress. It’s kind of a tipping point in longevity.”

He does, at moments like these, have something of a mad glint in his eye. Or at least the profound certitude of a fundamentalist cleric. Newsweek, a few years back, quoted an anonymous colleague claiming that, “Ray is going through the single most public midlife crisis that any male has ever gone through.” His evangelism (and commercial endorsement) of a whole lot of dietary supplements has more than a touch of the “Dr Gillian McKeith (PhD)” to it. And it’s hard not to ascribe a psychological aspect to this. He lost his adored father, a brilliant man, he says, a composer who had been largely unsuccessful and unrecognised in his lifetime, at the age of 22 to a massive heart attack. And a diagnosis of diabetes at the age of 35 led him to overhaul his diet.

But isn’t he simply refusing to accept, on an emotional level, that everyone gets older, everybody dies?

“I think that’s a great rationalisation because our immediate reaction to hearing someone has died is that it’s not a good thing. We’re sad. We consider it a tragedy. So for thousands of years, we did the next best thing which is to rationalise. ‘Oh that tragic thing? That’s really a good thing.’ One of the major goals of religion is to come up with some story that says death is really a good thing. It’s not. It’s a tragedy. And people think we’re talking about a 95-year-old living for hundreds of years. But that’s not what we’re talking about. We’re talking radical life extension, radical life enhancement.

“We are talking about making ourselves millions of times more intelligent and being able to have virtual-reality environments which are as fantastic as our imagination.”

Although possibly this is what Kurzweil’s critics, such as the biologist PZ Myers, mean when they say that the problem with Kurzweil’s theories is that “it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad.” Or Jaron Lanier, who calls him “a genius” but “a product of a narcissistic age”.

But then, it’s Kurzweil’s single-mindedness that’s been the foundation of his success, that made him his first fortune when he was still a teenager, and that shows no sign of letting up. Do you think he’ll live for ever, I ask Amy. “I hope so,” she says, which seems like a reasonable thing for an affectionate daughter to wish for. Still, I hope he does too. Because the future is almost here. And it looks like it’s going to be quite a ride.


This article can also be found on the Guardian website.