Don’t Fear Artificial Intelligence by Ray Kurzweil

This is an article from TIME by Ray Kurzweil called Don’t Fear Artificial Intelligence.  Basically, Kurzweil’s stance is that “technology is a double-edged sword” and that it always has been, but that’s no reason to abandon the research.  Kurzweil also states that, “Virtually every­one’s mental capabilities will be enhanced by it within a decade.”  I hope it makes people smarter and not just more intelligent! 

Don’t Fear Artificial Intelligence


Kurzweil is the author of five books on artificial ­intelligence, including the recent New York Times best seller “How to Create a Mind.”

Two great thinkers see danger in AI. Here’s how to make it safe.

Stephen Hawking, the pre-eminent physicist, recently warned that artificial intelligence (AI), once it sur­passes human intelligence, could pose a threat to the existence of human civilization. Elon Musk, the pioneer of digital money, private spaceflight and electric cars, has voiced similar concerns.

If AI becomes an existential threat, it won’t be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil-­defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense. Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages.

The typical dystopian futurist movie has one or two individuals or groups fighting for control of “the AI.” Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands; it’s in 1 billion or 2 billion hands. A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually every­one’s mental capabilities will be enhanced by it within a decade.

We will still have conflicts among groups of people, each enhanced by AI. That is already the case. But we can take some comfort from a profound, exponential decrease in violence, as documented in Steven Pinker’s 2011 book, The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary somewhat from location to location, the rate of death in war is down hundredsfold compared with six centuries ago. Since that time, murders have declined tensfold. People are surprised by this. The impression that violence is on the rise results from another trend: exponentially better information about what is wrong with the world—­another development aided by AI.

There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar ­Conference on Recombinant DNA was organized in 1975 to ­assess its potential dangers and devise a strategy to keep the field safe. The resulting guidelines, which have been revised by the industry since then, have worked very well: there have been no significant problems, accidental or intentional, for the past 39 years. We are now seeing major ad­vances in medical treatments reaching clinical practice and thus far none of the anticipated problems.

Consideration of ethical guidelines for AI goes back to Isaac Asimov’s three laws of robotics, which appeared in his short story “Runaround” in 1942, eight years before Alan Turing introduced the field of AI in his 1950 paper “Computing Machinery and Intelligence.” The median view of AI practitioners today is that we are still several decades from achieving human-­level AI. I am more optimistic and put the date at 2029, but either way, we do have time to devise ethical standards.

There are efforts at universities and companies to develop AI safety strategies and guidelines, some of which are already in place. Similar to the Asilomar guidelines, one idea is to clearly define the mission of each AI program and to build in encrypted safeguards to prevent unauthorized uses.

Ultimately, the most important approach we can take to keep AI safe is to work on our human governance and social institutions. We are already a human-­machine civilization. The best way to avoid destructive conflict in the future is to continue the advance of our social ideals, which has already greatly reduced violence.

AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping to clean up the environment, providing high-­quality education to people all over the world, helping the disabled (including providing Hawking’s voice) and contributing in a myriad of other ways. We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress. We have a moral imperative to realize this promise while controlling the peril. It won’t be the first time we’ve succeeded in doing this.



This article can also be found here.


PostHuman: An Introduction to Transhumanism from the British Institute of Posthuman Studies

This video by the British Institute of Posthuman Studies explores three areas of transhumanism: super longevity, super intelligence, and super well-being.  It’s called PostHuman: An Introduction to Transhumanism, and it’s a great video to show your friends who have never heard of transhumanism or the technological singularity.

Runtime: 11:11

This video can also be found at

Video Info:

Published on Nov 5, 2013

We investigate three dominant areas of transhumanism: super longevity, super intelligence and super wellbeing, and briefly cover the ideas of thinkers Aubrey de Grey, Ray Kurzweil and David Pearce.

Official Website:

Written by: Peter Brietbart and Marco Vega
Animation & Design Lead: Many Artists Who Do One Thing (Mihai Badic)
Animation Script: Mihai Badic and Peter Brietbart
Narrated by: Holly Hagan-Walker
Music and SFX: Steven Gamble
Design Assistant: Melita Pupsaite
Additional Animation: Nicholas Temple
Other Contributors: Callum Round, Asifuzzaman Ahmed, Steffan Dafydd, Ben Kokolas, Cristopher Rosales
Special Thanks: David Pearce, Dino Kazamia, Ana Sandoiu, Dave Gamble, Tom Davis, Aidan Walker, Hani Abusamra, Keita Lynch


The Social Futurist policy toolkit by Amon Twyman

This is an article by Amon Twyman at the Institute for Ethics & Emerging Technologies (IEET).  The article (called The Social Futurist Policy Toolkit) lays out a basic blueprint for Social Futurist policy.  Basically, it’s a kind of proposal for post-scarcity economics.  

The Social Futurist policy toolkit

Amon Twyman

By Amon Twyman

Posted: Apr 27, 2014

In a recent blog post and IEET article, I laid out an extremely general critique of Capitalism’s place within our society, and the barest outline of an alternative known as Social Futurism. The essence of that article was that Capitalism does certain things very well but it cannot be paused or adjusted when its effects become problematic, that rapid technological change appears to be on the verge of making certain alternatives viable, and that unfortunately we may be forced to fight for our right to personally choose those alternatives.

That article was necessarily brief and very broad, which did not allow me the opportunity to address policy details of any sort. It would be unfortunate if people thought that meant Social Futurism has no specific ideas at its disposal, so I want to lay out a kind of “policy toolkit”, here. The following policy categories are not compulsory features of any Social Futurist movement or group, but are more like basic building blocks from which specific policy configurations could be adapted to local conditions. Similarly, the toolkit as it currently stands is in no way considered exhaustive.

It is my intent that this toolkit should form a kind of bridge between the broadest, most general level of political discussion on the one hand, and the development of specific policies for local groups on the other. The six basic policy categories are only very briefly discussed below, but will each soon be analysed fully by the WAVE research institute.

Finally, none of the ideas presented in this article are new (section 6 being my only novel contribution), but this mix is seldom presented in a single ‘chunk’ that can be easily memorised and communicated. It is my hope that in time the label “Social Futurism” may act as the natural intersection of these disparate-but-compatible ideas, enabling people to refer to an array of possible solutions to major problems in two words rather than two thousand.

1. Evidence, Balance, & Transition

All of the policies in this toolkit should be approached from a pragmatic and flexible (rather than an ideologically constrained) point of view. When trying to be pragmatic and flexible, our main concern is with policies that actually solve problems, so the use of empirical evidence is central to Social Futurism. Policy development and review should emphasise the setting of quantifiable goals and application of empirical evidence wherever that is an option, to encourage policy that evolves to better meet our goals over time.

In this vein, we should seek to find optimal balances between extreme ideological positions, to the extent that any given choice may be viewed as a continuum rather than a binary choice. An extremely important example is the question of transition, which is to say the process of development from our current PEST (political, economic, social, technological) situation to a more efficient and just society. Often political questions are depicted as a false dichotomy, or choice between things as they are and radical utopias entirely disconnected from current reality. What is both preferable and more tractable is an intelligent balance of the past and future, in the form of a pragmatic transition phase.

For example, sections 2-4 below propose a series of economic adjustments to society. From the perspective of someone invested in the status quo, they are extremely radical suggestions. From the perspective of a radical utopian, they are half-measures at best. From a Social Futurist perspective, they are required to maximise the likelihood of a better society actually coming into existence, while attempting to minimise the risk of severe societal destabilisation caused by rapid and untested change. My own vision of a societal transition phase follows an observation from Ray Kurzweil, in which change often takes longer than anticipated, but also ends up being much deeper than anticipated, meaning that focus on a transition phase may allow us to work toward truly radical transformative change in the longer term.

In short, the effectiveness of our methods should be tested by looking at evidence, we should balance our policies in a flexible and pragmatic manner, and we should seek a staged transition toward a better future rather than risk critically destabilizing society.

2. Universal Basic Income & LVAT

A minimal, “safety net” style Universal Basic Income should be established. This is as opposed to putting undue strain on the economy by introducing a basic income larger than is required to satisfy essential living requirements. Where possible, the UBI should be paid for by a combination of dismantling welfare bureaucracies, and Land Value & Automation Taxes (LVAT).

LVAT is the extension of traditional Land Value Tax to include a small tax on every unit of workplace automation equivalent to a single human being replaced. This extension of LVT is intended to harness the economic momentum of workplace automation, which is expected to be the principal cause of technological unemployment in coming decades. The tax should be considerably less than the cost of hiring a human, thus causing no disincentive to automation (some would argue that any tax would disincentivize automation, but our goal is not to encourage automation, and as long as automation is cheaper than human labour it will win out). The LVAT would take the place of increasing numbers of arbitrary taxes on goods and services which are currently being added and increased to shore up Western economies.
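The paragraph above rests on one arithmetic constraint: the per-unit automation tax must stay below the cost of hiring a human, so automation is never disincentivised. A minimal sketch of that check, with every figure invented purely for illustration (the article proposes no specific rates):

```python
# Hypothetical illustration of the LVAT constraint described above.
# All figures are invented; nothing here comes from the article.

def annual_automation_cost(units: int, cost_per_unit: float, tax_per_unit: float) -> float:
    """Total yearly cost of running `units` of automation, LVAT included."""
    return units * (cost_per_unit + tax_per_unit)

def preserves_automation_incentive(cost_per_unit: float,
                                   tax_per_unit: float,
                                   human_wage: float) -> bool:
    """The article's constraint: automation plus tax must stay cheaper than a human."""
    return cost_per_unit + tax_per_unit < human_wage

# Example: each unit of automation costs 8,000/year to run, the LVAT adds
# 2,000/year, and a human doing the same job would cost 30,000/year.
assert preserves_automation_incentive(8_000, 2_000, 30_000)
print(annual_automation_cost(units=50, cost_per_unit=8_000, tax_per_unit=2_000))
```

As long as the inequality holds, the tax harvests revenue from automation without reversing the economic case for it, which is exactly the design goal stated above.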

Social Futurism is compatible with private property ownership and does not advocate property confiscation. Wealth redistribution is only advocated to the degree that it can be achieved through LVAT & UBI as described above. The extent to which people should be able to choose if, how, and to whom they pay tax is addressed in section 6. It is also worth noting here that where a functional equivalent of UBI exists (e.g. citizen shares in Distributed Autonomous Cooperatives) which is proven more effective, then Social Futurists should favour the more effective solution as per point 1.

3. Abolition of Fractional Reserve Banking

Fractional Reserve Banking is the process by which banks are required to hold only a fraction of their customers’ deposits in reserve, allowing the money supply to grow to a multiple of the base amount held in reserve. Through this practice, central banks may charge interest on the money they create (thereby creating a debt which can never be repaid, across society as a whole) and expose the entire economy to risk when they cannot meet high demand for withdrawals. Fractional Reserve Banking fosters potentially critical risk to the entirety of society for the benefit of only a tiny proportion of citizens, and therefore should be abolished. The alternative to Fractional Reserve Banking is Full Reserve or 100% Reserve Banking, in which all banks must hold the full amount of deposits in reserve at all times.
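The expansion mechanism described above is the standard money-multiplier process: each bank keeps a fraction of a deposit in reserve and lends the rest, the loan is re-deposited, and so on, so total deposits converge toward the initial deposit divided by the reserve ratio. A minimal sketch, with illustrative figures:

```python
# A minimal sketch of how fractional reserves expand the money supply.
# With reserve ratio r, an initial deposit D supports roughly D / r in
# total deposits via repeated lend-and-redeposit cycles.

def money_supply(initial_deposit: float, reserve_ratio: float, rounds: int = 1000) -> float:
    """Total deposits created after repeated lend-and-redeposit cycles."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the lent fraction is re-deposited
    return total

# With a 10% reserve requirement, 1,000 of base money supports ~10,000 of
# deposits; under full (100%) reserves the supply never exceeds the deposit.
print(round(money_supply(1_000, 0.10)))  # ~10000
print(money_supply(1_000, 1.00))         # 1000.0
```

Setting the ratio to 1.0 reproduces the Full Reserve case advocated above: no multiplication occurs, so the money supply equals the base deposits.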

Full Reserve Banking is much more conservative than Fractional Reserve Banking, and would signal an end to “easy credit”. In turn, it would afford enough stability to see our society through a sustainable transition phase, until technological post-scarcity makes reliance on traditional banking systems and the Capitalist principle of surplus value itself unnecessary.

4. Responsible Capitalism, Post-Scarcity, & Emergent Commodity Markets

Social Futurist policy must favour the encouragement of responsible trade and strong regulation of reckless behaviour, with an eye to making Capitalism an engine of society rather than its blind master. To this end, it should be Social Futurist policy that all companies that wish to operate within any given community must be registered with the appropriate regulation bodies employed by that community. Non-regulation and self-regulation by industries which are not accountable to the communities they affect is unacceptable. (For the purposes of this brief statement I have conflated Capitalism and markets, despite the fact that trade existed millennia before the organization of society around profit based on Capital investment. These issues will be treated separately and extensively, later).

Where possible, Social Futurists should advocate the transition to non-monetary peer-to-peer resource management under post-scarcity conditions. In other words, we should seek to avoid the creation or maintenance of artificial scarcity in essential resources. A continuing place for trade even under post-scarcity conditions is acknowledged and encouraged where it reduces artificial scarcity, promotes technical innovation, and serves the needs and directives of the community. Emergent commodities (e.g. natural artificial scarcities such as unique artworks) will need a framework for responsible trade even under optimal post-scarcity conditions, so it behooves us to develop such frameworks now, in the context of contemporary Capitalism.

5. Human autonomy, privacy, & enhancement

Social Futurism incorporates the transhumanist idea that the human condition can and should be improved through the intelligent and compassionate application of technology. We also strongly emphasise voluntarism, and in combination these things necessitate the championing of people’s rights over their own bodies and information. It should be Social Futurist policy to oppose any development by which people would lose individual sovereignty or involuntarily cede ownership of their personal information. Social Futurists must also defend the individual’s right to modify themselves by technological means, provided that the individual is a mentally competent consenting adult and the modification would not pose significant risk of harm to others.

6. Establishment of VDP (Virtual, Distributed, Parallel) States

The principle of subsidiarity holds that organizational responsibility should be devolved to the lowest or most local level capable of dealing with the situation. In other words, power should be decentralised, insofar as that doesn’t diminish our ability to face challenges as a society.

For example, local governance issues should be handled by local rather than national-level government where possible. Social Futurism takes subsidiarity to its logical conclusion, by insisting that people should have the right to govern their own affairs as they see fit, as long as by doing so they are not harming the wider community. On the other side of the coin, broader (e.g. national and transnational) levels of governance would be responsible for issues that local organizations and individuals could not competently face alone.

Where global governance is needed, the model should be one of cooperating global agencies focused on a specific area of expertise (e.g. the World Health Organization), rather than a single government acting in a centralised manner to handle all types of issue. In this way, decentralization of power applies even when an issue cannot be resolved on the local level.

In order to encourage the development of such a system, we advocate the establishment of communities with powers of self-governance known as VDP States, where VDP stands for “Virtual, Distributed, Parallel”. ‘Virtual’ refers to online community, orthogonal to traditional geographic territories. ‘Distributed’ refers to geographic States, but ones where different parts of the community exist in different locations, as a network of enclaves. ‘Parallel’ refers to communities that exist on the established territory of a traditional State, acting as a kind of organizational counterpoint to that State’s governing bodies. Two or three of these characteristics may be found in a single VDP State, but it is expected that most such communities would emphasise one characteristic over the others. Alternatively, a VDP State may emphasise different characteristics at different stages in its development.

Given Social Futurist emphasis on voluntarism, VDP State citizenship must be entirely voluntary. Indeed, the entire point of the VDP State is to broaden the range of governance models which people may voluntarily choose to engage with, where they are currently told that they simply have to accept a single model of governance.

As this is clearly a new and experimental approach to governance, it is to be expected that many ideas associated with it are still to be properly developed and tested. Some of these ideas may not meet our own standards of empirical review. However, to briefly anticipate some common objections it is worth noting several points. Firstly, decentralization does not imply an absence of social organization. It simply means that people can exercise more choice in how they engage with society. Secondly, yes it is true that all three of the VDP characteristics have limitations as well as strengths (e.g. difficulty in defending isolated enclaves), but that is why any given VDP State would find the mix of features that suits its purpose and context best. Thirdly, as mentioned earlier in this article, different approaches may be mixed and balanced as necessary, such as a single-location VDPS being used as a template for the later creation of a distributed network of communities. Finally, the VDPS idea is not intended to stand alone but to complement any initiatives which have the potential to maximize its value (Open Source Ecology, for example).

Further development of these ideas will be posted on the WAVE movement blog.

Addendum: A note on Marxism

Below I give an example of the point made in section 1 (about balance and transition), which draws upon a Marxist viewpoint because Social Futurist concerns tend to be shared by Marxists, but the logic would equally apply to movements whose long-term ideals and methods are more like our own, such as The Zeitgeist Movement. I have put this note to one side because I do not want to give an incorrect first impression that Social Futurism is Marxist in nature. It is simply intended to address societal problems which have already been comprehensively analysed by Marxists, so it is worth noting the relevance of their point of view to our own.

Marx argued that the root problem with Capitalism is surplus value. This means that Capitalists (i.e. investors) pay workers only a proportion of the value of what is produced by their work, and the remaining (“surplus”) value is taken as profit by the Capital owning class, along with rent and interest on debts. Marxists assert that workers should collectively own the means of production (i.e. factories, machines, resources, all Capital), thereby ending surplus value and phenomena such as problematic banking practices along with it. From this perspective it might be reasonably suggested that “treating the symptoms” rather than the core disorder would be fruitless (or worse, dangerous), and that citizen benefits of any sort should be paid for by distributing all profit from collectively owned means of production equally.

Without wishing to get into a discussion of whether ideal Marxism is possible or doomed to give rise to historical Communist authoritarianism, I would say that even a benign Marxist revolution would entirely destabilize society if it occurred too quickly. Social Futurism does not deny the Marxist analysis of the problem, but seeks a staged transition to a post-Capitalist society which does not attempt to undermine the entire basis of our current society in a single move. Although an optimal, long-term Social Futurist outcome may not be desirable to some Marxists (and certainly not to historical Stalinists or Maoists), it would definitely involve the eventual transition to democratic, decentralised post-scarcity, and removal of Capitalist surplus value as the central organizational principle of our civilization.


Dr M. Amon Twyman (BSc, MSc Hons, DPhil) is an Affiliate Scholar of the IEET and a philosopher interested in the impact of technology on society and the human condition. Amon was a co-founder of the UK Transhumanist Association (now known as Humanity+ UK), and went on to establish Zero State and the WAVE research institute.


This article can also be found at

Ray Kurzweil – How to Create a Mind

This is one of the longer presentations I’ve seen by Ray Kurzweil.  In the video, Kurzweil discusses some of the concepts behind his latest book, How to Create a Mind.  This talk covers a lot of ground: everything from Kurzweil’s Law (the Law of Accelerating Returns), merging with technology, and pattern-recognition technology to the effects of the economy on life expectancy, solar energy, medical technology, education…  Well, you get the picture.  Check it out.

Runtime: 1:01:00

This video can also be found at

Video info:

Published on Jun 17, 2014



NASA and Singularity University

This isn’t an article so much as it is a memo posted on the NASA website.  Basically, the ‘article’ states that NASA supports the Singularity University endeavor.  This is actually kind of old news (from 2009), but part of the mission of Dawn of Giants is to convince people of the need to take transhumanism and the idea of the technological singularity seriously.  Maybe the support of government agencies like NASA and DARPA will help to this end.  

NASA Ames Becomes Home To Newly Launched Singularity University

Rachel Prucey – Ames Research Center, Moffett Field, Calif.

Denise Vardakas – Singularity University, Moffett Field, Calif.

Feb. 03, 2009

MOFFETT FIELD, Calif. — Technology experts and entrepreneurs with a passion for solving humanity’s grand challenges will soon have a new place to exchange ideas and facilitate the use of rapidly developing technologies.

NASA Ames Research Center today announced an Enhanced Use Lease Agreement with Singularity University (SU) to house a new academic program at Ames’ NASA Research Park. The university will open its doors this June and begin offering a nine-week graduate studies program, as well as three-day chief executive officer-level and 10-day management-level programs. The SU curriculum provides a broad, interdisciplinary exposure to ten fields of study: future studies and forecasting; networks and computing systems; biotechnology and bioinformatics; nanotechnology; medicine, neuroscience and human enhancement; artificial intelligence, robotics, and cognitive computing; energy and ecological systems; space and physical sciences; policy, law and ethics; and finance and entrepreneurship.

“The NASA Ames campus has a proud history of supporting ground-breaking innovation, and Singularity University fits into that tradition,” said S. Pete Worden, Ames Center Director and one of Singularity University’s founders. “We’re proud to help launch this unique graduate university program and are looking forward to the new ideas, technologies and social applications that result.”

Singularity University was founded Sept. 20, 2008 by a group of leaders, including Worden; Ray Kurzweil, author and futurist; Peter Diamandis, space entrepreneur and chairman of the X PRIZE Foundation; Robert Richards, co-founder of the International Space University; Michael Simpson, president of the International Space University; and a group of SU associate founders who have contributed time and capital.

“With its strong focus on interdisciplinary learning, Singularity University is poised to foster the leaders who will create a uniquely creative and productive future world,” said Kurzweil.


NASA Ames would like to eliminate confusion that might have arisen concerning NASA personnel as “Founders” of Singularity University in the Feb. 3, 2009 news release, “NASA Ames Becomes Home To Newly Launched Singularity University.”

NASA Ames Center Director S. Pete Worden hosted SU’s Founders Conference on Sept. 20, 2008 at NASA Ames. On NASA’s behalf he and other Ames personnel provided input to SU’s founders and encouraged the scientific and technical discussions. Neither Dr. Worden nor any other NASA employee is otherwise engaged in the University’s operation nor do any NASA Ames employees have personal or financial interests in Singularity University. As with other educational institutions, NASA employees may support educational activities of SU through lectures, discussions and interactions with students and staff. NASA employees may also attend SU as students.

For more information about Singularity University, visit:

For more information about NASA programs, visit:


This can also be found at

Ray Kurzweil: The Exponential Mind

This is an interview with Ray Kurzweil by Chris Raymond.  The article is called Ray Kurzweil: The Exponential Mind.  It follows the usual Kurzweilian interview parameters (a little background, explain exponential growth with examples, discuss where technology is taking us), but it also goes into some of the things his critics have to say and talks a bit about Kurzweil’s new role at Google.


Ray Kurzweil: The Exponential Mind

The inventor, scientist, author, futurist and director of engineering at Google aims to help mankind devise a better world by keeping tabs on technology, consumer behavior and more.

Chris Raymond

Ray Kurzweil is not big on small talk. At 3:30 on a glorious early summer afternoon, the kind that inspires idle daydreams, he strides into a glass-walled, fifth-floor conference room overlooking the leafy tech town of Waltham, Mass.

Lowering himself into a chair, he looks at his watch and says, “How much time do you need?”

It doesn’t quite qualify as rude. He’s got a plane to catch this evening, and he’s running nearly two hours behind schedule. But there is a hint of menace to the curtness, a subtle warning to keep things moving. And this is certainly in keeping with Kurzweil’s M.O.

“If you spend enough time with him, you’ll see that there’s very little waste in his day,” says director Barry Ptolemy, who tailed Kurzweil for more than two years while filming the documentary Transcendent Man. “His nose is always to the grindstone; he’s always applying himself to the next job, the next interview, the next book, the next little task.”

It would appear the 66-year-old maverick has operated this way since birth. He decided to become an inventor at age 5, combing his Queens, N.Y., neighborhood for discarded radios and bicycle parts to assemble his prototypes. In 1965, at age 17, he unveiled an early project, a computer capable of composing music, on the Steve Allen TV show I’ve Got a Secret. He made his first trip to the White House that same year, meeting with Lyndon Johnson, along with other young scientists uncovered in a Westinghouse talent search. As a sophomore at MIT, he launched a company that used a computer to help high school students find their ideal college. Then at 20, he sold the firm to a New York publisher for $100,000, plus royalties.

The man has been hustling since he learned how to tie his shoes.

Though he bears a slight resemblance to Woody Allen—beige slacks, open collar, reddish hair, glasses—he speaks with the baritone authority of Henry Kissinger. He brings an engineer’s sense of discipline to each new endeavor, pinpointing the problem, surveying the options, choosing the best course of action. “He’s very good at triage, very good at compartmentalizing,” says Ptolemy.

A bit ironically, Kurzweil describes his first great contribution to society—the technology that first gave computers an audible voice—as a solution he developed in the early 1970s for no problem in particular. After devising a program that allowed the machines to recognize letters in any font, he pursued market research to decide how his advancement could be useful. It wasn’t until he sat next to a blind man on an airplane that he realized his technology could shatter the inherent limitations of Braille; only a tiny sliver of books had been printed in Braille, and no topical sources—newspapers, magazines or office memos—were available in that format.

Kurzweil and a team that included engineers from the National Federation for the Blind built around his existing software to make text-to-speech reading machines a reality by 1976. “What really motivates an innovator is that leap from dry formulas on a blackboard to changes in people’s lives,” Kurzweil says. “It’s very gratifying for me when I get letters from blind people who say they were able to get a job or an education due to the reading technology that I helped create…. That’s really the thrill of being an innovator.”

The passion for helping humanity has pushed Kurzweil to establish a double-digit number of companies over the years, pursuing all sorts of technological advancements. Along the way, his sleepy eyes have become astute at seeing into the future.

In The Age of Intelligent Machines, first published in 1990, Kurzweil started sharing his visions with the public. At the time they sounded a lot like science fiction, but a startling number of his predictions came true. He correctly predicted that by 1998 a computer would win the world chess championship, that new modes of communication would bring about the downfall of the Soviet Union, and that millions of people worldwide would plug into a web of knowledge. Today, he is the author of five best-selling books, including The Singularity Is Near and How to Create a Mind.

This wasn’t his original aim. In 1981, when he started collecting data on how rapidly computer technology was evolving, it was for purely practical reasons.

“Invariably people create technologies and business plans as if the world is never going to change,” Kurzweil says. As a result, their companies routinely fail, even though they successfully build the products they promise to produce. Visionaries see the potential, but they don’t plot it out correctly. “The inventors whose names you recognize were in the right place with the right idea at the right time,” he explains, pointing to his friend Larry Page, who launched Google with Sergey Brin in 1998, right about the time the founders of the dot-com era’s legendary busts discovered mankind wasn’t remotely ready for Internet commerce.

How do you master timing? You look ahead.

“My projects have to make sense not for the time I’m looking at, but the world that will exist when I finish,” Kurzweil says. “And that world is a very different place.”

In recent years, companies like Ford, Hallmark and Hershey’s have recognized the value in this way of thinking, hiring expert guides like Kurzweil to help them study the shifting sands and make sense of the road ahead. These so-called “futurists” keep a careful eye on scientific advances, consumer behavior, market trends and cultural leanings. According to Intel’s resident futurist, Brian David Johnson, the goal is not so much to predict the future as to invent it. “Too many people believe that the future is a fixed point that we’re powerless to change,” Johnson recently told Forbes. “But the reality is that the future is created every day by the actions of people.”

Kurzweil subscribes to this notion. He has boundless confidence in man’s ability to construct a better world. This isn’t some utopian dream. He has the data to back it up—and a team of 10 researchers who help him construct his mathematical models. They’ve been plotting the price and computing power of information technologies—processing speed, data storage, that sort of thing—for decades.

In his view, we are on the verge of a great leap forward, an age of unprecedented invention, the kinds of breakthroughs that can lead to peace and prosperity and make humans immortal. In other words, he has barely begun to bend time to his will.

Ray Kurzweil does not own a crystal ball. The secret to his forecasting success is “exponential thinking.”

Our minds are trained to see the world linearly. If you drive at this speed, you will reach your destination at this time. But technology evolves exponentially. Kurzweil calls this the Law of Accelerating Returns.

He leans back in his chair to retrieve his cellphone and holds it aloft between two fingers. “This is several billion times more powerful than the computer I used as an undergraduate,” he says, and goes on to point out that the device is also about 100,000 times smaller. Whereas computers once took up entire floors at university research halls, far more advanced models now fit in our pockets (and smaller spaces) and are becoming more minuscule all the time. This is a classic example of exponential change.

The Human Genome Project is another. Launched in 1990, it was billed from the start as an ambitious, 15-year venture. Estimated cost: $3 billion. When researchers neared the timeline’s halfway point with only 3 percent of the DNA sequencing finished, critics were quick to pounce. What they did not see was the annual doubling in output. Thanks to increases in computing power and efficiency, 3 percent became 6 percent and then 12 percent and so on. With a few more doublings, the project was completed a full two years ahead of schedule.
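The arithmetic behind that comeback is easy to check. Here is a minimal sketch, assuming the article's simplified model of one doubling per year starting from the 3 percent figure quoted above:

```python
# Count the annual doublings needed to go from 3% of the genome
# sequenced to 100%, per the doubling model described above.
percent_done = 3.0
doublings = 0
while percent_done < 100:
    percent_done = min(100.0, percent_done * 2)
    doublings += 1

print(doublings)  # 6 doublings: 3 -> 6 -> 12 -> 24 -> 48 -> 96 -> done
```

Six more years of doubling output finishes the job, which is why a project that looked hopelessly behind at the halfway mark could still come in early.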

That is the power of exponential change.

“If you take 30 steps linearly, you get to 30,” Kurzweil says. “If you take 30 steps exponentially, you’re at a billion.”

The fruits of these accelerating returns are all around us. It took more than 15 years to sequence HIV beginning in the 1980s. Thirty-one days to sequence SARS in 2003. And today we can map a virus in a single day.

While thinking about the not-too-distant future, when virtual reality and self-driving cars, 3-D printing and Google Glass are norms, Kurzweil dreams of the next steps. In his vision, we’re rapidly approaching the point where human power becomes infinite.

Holding the phone upright, he swipes a finger across the glass.

“When I do this, my fingers are connected to my brain,” Kurzweil says. “The phone is an extension of my brain. Today a kid in Africa with a smartphone has access to all of human knowledge. He has more knowledge at his fingertips than the president of the United States did 15 years ago.” Multiplying by exponents of progress, Kurzweil projects continued shrinkage in computer size and growth in power over the next 25 years. He hypothesizes microscopic nanobots—inexpensive machines the size of blood cells—that will augment our intelligence and immune systems. These tiny technologies “will go into our neocortex, our brain, noninvasively through our capillaries and basically put our neocortex on the cloud.”

Imagine having Wikipedia linked directly to your brain cells. Imagine digital neurons that reverse the effects of Parkinson’s disease. Maybe we can live forever.

He smiles, letting the sweep of his statements sink in. Without question, it is an impressive bit of theater. He loves telling stories, loves dazzling people with his visions. But his zeal for showmanship has been known to backfire.

The biologist P.Z. Myers has called him “one of the greatest hucksters of the age.” Other critics have labeled him crazy and called his ideas hot air. Kurzweil’s public pursuit of immortality doesn’t help matters. In an effort to prolong his life, Kurzweil takes 150 supplements a day, washing them down with cup after cup of green tea and alkaline water. He monitors the effects of these chemistry experiments with weekly blood tests. It’s one of a few eccentricities.

“He’s extremely honest and direct,” Ptolemy says of his friend’s prickly personality. “He talks to people and if he doesn’t like what you’re saying, he’ll just say it. There’s no B.S. If he doesn’t like what he’s hearing, he’ll just say, ‘No. Got anything else?’”

But it’s hard to argue with the results. Kurzweil claims 86 percent of his predictions for the year 2009 came true. Others insist the figure is actually much lower. But that’s just part of the game. Predicting is hard work.

“He was considered extremely radical 15 years ago,” Ptolemy says. “That’s less the case now. People are seeing these technologies catch up—the iPhone, Google’s self-driving cars, Watson [the IBM computer that bested Jeopardy genius Ken Jennings in 2011]. All these things start happening, and people are like, ‘Oh, OK. I see what’s going on.’”

Ray Kurzweil was born into a family of artists. His mother was a painter; his father, a conductor and musician. Both moved to New York from Austria in the late 1930s, fleeing the horrors of Hitler’s Nazi regime. When Ray was 7 years old, his maternal grandfather returned to the land of his birth, where he was given the chance to hold in his hands documents that once belonged to the great Leonardo da Vinci—painter, sculptor, inventor, thinker. “He described the experience with reverence,” Kurzweil writes, “as if he had touched the work of God himself.”

Ray’s parents raised their son and daughter in the Unitarian Church, encouraging them to study the teachings of various religions to arrive at the truth. Ray is agnostic, in part, he says, because religions tend to rationalize death; but like Da Vinci, he firmly believes in the power of ideas—the ability to overcome pain and peril, to transcend life’s challenges with reason and thought. “He wants to change the world—impact it as much as possible,” Ptolemy says. “That’s what drives him.”

Despite what his critics say, Kurzweil is not blind to the threats posed by modern science. If nanotechnology could bring healing agents into our bodies, nano-hackers or nano-terrorists could spread viruses—the literal, deadly kind. “Technology has been a double-edged sword ever since fire,” he says. “It kept us warm, cooked our food, but also burned down our villages.” That doesn’t mean you keep it under lock and key.

In January of 2013, Kurzweil entered the next chapter of his life, dividing his time between Waltham and San Francisco, where he works with Google engineers to deepen computers’ understanding of human language. “It’s my first job with a company I didn’t start myself,” he deadpans. The idea is to move the company beyond keyword search, to teach computers how to grasp the meaning and ideas in the billions of documents at their disposal, to move them one more step forward on the journey to becoming sentient virtual assistants—picture Joaquin Phoenix’s sweet-talking laptop in 2013’s Kurzweil-influenced movie Her, a Best Picture nominee.

Kurzweil had pitched the idea of breaking computers’ language barrier to Page while searching for investors. Page offered him a full-time salary and Google-scale resources instead, promising to give Kurzweil the independence he needs to complete the project. “It’s a courageous company,” Kurzweil says. “It has a biz model that supports very widespread distribution of these technologies. It’s the only place I could do this project. I would not have the resources, even if I raised all the money I wanted in my own company. I wouldn’t be able to run algorithms on a million computers.”

That’s not to say Page will sit idle while Kurzweil toils away. In the last year, the Google CEO has snapped up eight robotics companies, including industry frontrunner Boston Dynamics. He paid $3.2 billion for Nest Labs, maker of learning thermostats and smoke alarms. He scooped up the artificial intelligence startup DeepMind and lured Geoffrey Hinton, the world’s foremost expert on neural networks—computer systems that function like a brain—into the Google fold.

Kurzweil’s ties to Page run deep. Google (and NASA) provided early funding for Singularity University, the education hub/startup accelerator Kurzweil launched with the XPRIZE’s Peter Diamandis to train young leaders to use cutting-edge technology to make life better for billions of people on Earth.

Kurzweil’s faith in entrepreneurship is so strong that he believes it should be taught in elementary school.


Because that kid with the cellphone now has a chance to change the world. If that seems far-fetched, consider the college sophomore who started Facebook because he wanted to meet girls or the 15-year-old who recently invented a simple new test for pancreatic cancer. This is one source of his optimism. Another? The most remarkable thing about the mathematical models Kurzweil has assembled, the breathtaking arcs that demonstrate his thinking, is that they don’t halt their climb for any reason—not for world wars, not for the Great Depression.

Once again, that’s the power of exponential growth.

“Things that seemed impossible at one point are now possible,” Kurzweil says. “That’s the fundamental difference between me and my critics.” Despite the thousands of years of evolution hard-wired into his brain, he resists the urge to see the world in linear fashion. That’s why he’s bullish on solar power, artificial intelligence, nanobots and 3-D printing. That’s why he believes the 2020s will be studded with one huge medical breakthrough after another.

“There’s a lot of pessimism in the world,” he laments. “If I believed progress was linear, I’d be pessimistic, too. Because we would not be able to solve these problems. But I’m optimistic—more than optimistic: I believe we will solve these problems because of the scale of these technologies.”

He looks down at his watch yet again. Mickey Mouse peeks out from behind the timepiece’s sweeping hands. “Just a bit of whimsy,” he says.

Nearly an hour has passed. The world has changed. It’s time to get on with his day.

Post date:

Oct 9, 2014


Transhumans: Technology Powered Superhumans (Slideshow)

I found this presentation (Transhumans: Technology Powered Superhumans) on SlideShare.  Some of the pictures are a bit cheesy, but these 46 slides touch on almost every category of transhumanism you can imagine.

Transhumans: Technology Powered Superhumans

Published on Nov 18, 2013


Transhumanism is the belief or theory that the human race can evolve beyond its current physical and mental limitations by means of science and technology. The more we explored this subject, the more fascinated we became by how people are riding current-era technologies to surpass the capabilities of the human body. If the current explorations in transhumanism are anything to go by, we believe the future will be very exciting!

In this report we explore the various technologies, people involved and the advancements made in the field of Transhumanism. We would love to hear your feedback, comments and suggestions.

Published in: Design, Technology, Spiritual



Just Another Definition of “The Singularity”

There are plenty of definitions of the singularity out there and I don’t plan to post any more of these, but I thought this one (from WhatIs) was worth having on Dawn of Giants.

Singularity (the)

Part of the Nanotechnology glossary:

The Singularity is the hypothetical future creation of superintelligent machines. Superintelligence is defined as a technologically created cognitive capacity far beyond that possible for humans. Should the Singularity occur, technology will advance beyond our ability to foresee or control its outcomes and the world will be transformed beyond recognition by the application of superintelligence to humans and/or human problems, including poverty, disease and mortality.

Revolutions in genetics, nanotechnology and robotics (GNR) in the first half of the 21st century are expected to lay the foundation for the Singularity. According to Singularity theory, superintelligence will be developed by self-directed computers and will increase exponentially rather than incrementally.

Lev Grossman explains the prospective exponential gains in capacity enabled by superintelligent machines in an article in Time:

“Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn’t even take breaks to play Farmville.”

Proposed mechanisms for adding superintelligence to humans include brain-computer interfaces, biological alteration of the brain, artificial intelligence (AI) brain implants and genetic engineering. Post-singularity, humanity and the world would be quite different. A human could potentially scan his consciousness into a computer and live eternally in virtual reality or as a sentient robot. Futurists such as Ray Kurzweil (author of The Singularity Is Near) have predicted that in a post-Singularity world, humans would typically live much of the time in virtual reality — which would be virtually indistinguishable from normal reality. Kurzweil predicts, based on mathematical calculations of exponential technological development, that the Singularity will come to pass by 2045.

Most arguments against the possibility of the Singularity involve doubts that computers can ever become intelligent in the human sense. The human brain and cognitive processes may simply be more complex than a computer could be. Furthermore, because the human brain is analog, with theoretically infinite values for any process, some believe that it cannot ever be replicated in a digital format. Some theorists also point out that the Singularity may not even be desirable from a human perspective because there is no reason to assume that a superintelligence would see value in, for example, the continued existence or well-being of humans.

Science-fiction writer Vernor Vinge first used the term the Singularity in this context in the 1980s, when he used it in reference to the British mathematician I.J. Good’s concept of an “intelligence explosion” brought about by the advent of superintelligent machines. The term is borrowed from physics; in that context a singularity is a point where the known physical laws cease to apply.



Ray Kurzweil’s Mind-Boggling Predictions for the Next 25 Years from SingularityHUB

This is an article from SingularityHub called, “Ray Kurzweil’s Mind-Boggling Predictions for the Next 25 Years.”  For those of you already familiar with Ray Kurzweil, you’ve probably heard all this before, but this is a great introduction to his work if you are not already familiar with it.

Ray Kurzweil’s Mind-Boggling Predictions for the Next 25 Years


In my new book BOLD, one of the interviews that I’m most excited about is with my good friend Ray Kurzweil.

Bill Gates calls Ray, “the best person I know at predicting the future of artificial intelligence.” Ray is also amazing at predicting a lot more beyond just AI.

This post looks at his incredible predictions for the next 20+ years.

Ray Kurzweil.

So who is Ray Kurzweil?

He has received 20 honorary doctorates, has been awarded honors from three U.S. presidents, and has authored 7 books (5 of which have been national bestsellers).

He is the principal inventor of many technologies ranging from the first CCD flatbed scanner to the first print-to-speech reading machine for the blind. He is also the chancellor and co-founder of Singularity University, and the guy tagged by Larry Page to direct artificial intelligence development at Google.

In short, Ray’s pretty smart… and his predictions are amazing, mind-boggling, and important reminders that we are living in the most exciting time in human history.

But first, let’s look back at some of the predictions Ray got right.

Predictions Ray has gotten right over the last 25 years

In 1990 (twenty-five years ago), he predicted…

…that a computer would defeat a world chess champion by 1998. Then in 1997, IBM’s Deep Blue defeated Garry Kasparov.

… that PCs would be capable of answering queries by accessing information wirelessly via the Internet by 2010. He was right, to say the least.

… that by the early 2000s, exoskeletal limbs would let the disabled walk. Companies like Ekso Bionics and others now have technology that does just this, and much more.

In 1999, he predicted…

… that people would be able to talk to their computer to give commands by 2009. While still in the early days in 2009, natural language interfaces like Apple’s Siri and Google Now have come a long way. I rarely use my keyboard anymore; instead I dictate texts and emails.

… that computer displays would be built into eyeglasses for augmented reality by 2009. Labs and teams were building head-mounted displays well before 2009, but Google started experimenting with Google Glass prototypes in 2011. Now, we are seeing an explosion of augmented and virtual reality solutions and HMDs. Microsoft just released the HoloLens, and Magic Leap is working on some amazing technology, to name two.

In 2005, he predicted…

… that by the 2010s, virtual solutions would be able to do real-time language translation in which words spoken in a foreign language would be translated into text that would appear as subtitles to a user wearing the glasses. Well, Microsoft (via Skype Translate), Google (Translate), and others have done this and beyond. One app called Word Lens actually uses your camera to find and translate text imagery in real time.

Ray’s predictions for the next 25 years

The above represent only a few of the predictions Ray has made.

While he hasn’t been precisely right, to the exact year, his track record is stunningly good.

Here are some of my favorite of Ray’s predictions for the next 25+ years.

If you are an entrepreneur, you need to be thinking about these. Specifically, how are you going to capitalize on them when they happen? How will they affect your business?

By the late 2010s, glasses will beam images directly onto the retina. Ten terabytes of computing power (roughly the same as the human brain) will cost about $1,000.

By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.

By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.

By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in the physical world at a whim.

By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.

I want to make an important point.

It’s not about the predictions.

It’s about what the predictions represent.

Ray’s predictions are a byproduct of his (and my) understanding of the power of Moore’s Law, more specifically Ray’s “Law of Accelerating Returns” and of exponential technologies.

These technologies follow an exponential growth curve based on the principle that the computing power that enables them doubles every two years.
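As a rough illustration of how that doubling principle compounds (a Moore's-Law-style assumption, not a measured figure), here is what a two-year doubling period implies over Kurzweil's favorite horizons:

```python
# If capability doubles every `doubling_period` years, the growth
# multiplier after `years` years is 2 ** (years / doubling_period).
def capability_multiplier(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(round(capability_multiplier(10)))  # 32x in a decade
print(round(capability_multiplier(25)))  # roughly 5793x in 25 years
```

Nothing about the two-year figure is exact, but the shape of the curve is the point: modest-sounding doublings stack into four-digit multipliers within a single career.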


As humans, we are biased to think linearly.

As entrepreneurs, we need to think exponentially.

I often talk about the 6D’s of exponential thinking.

Most of us can’t see the things Ray sees because the initial growth stages of exponential, DIGITIZED technologies are DECEPTIVE.

Before we know it, they are DISRUPTIVE—just look at the massive companies that have been disrupted by technological advances in AI, virtual reality, robotics, internet technology, mobile phones, OCR, translation software, and voice control technology.

Each of these technologies DEMATERIALIZED, DEMONETIZED, and DEMOCRATIZED access to services and products that used to be linear and non-scalable.

Now, these technologies power multibillion-dollar companies and affect billions of lives.

Image Credit: Singularity University; Ray Kurzweil and Kurzweil Technologies, Inc./Wikimedia Commons


The Singularity Isn’t Near by Paul Allen

This is a piece written by Paul Allen in which he presents his reasons for thinking a singularity will not occur until after 2045.  While I humbly disagree with some of Paul Allen’s assertions in this article, I must say that I respect Allen for admitting that “we are aware that the history of science and technology is littered with people who confidently assert that some event can’t happen, only to be later proven wrong—often in spectacular fashion.”  I also think he makes a salient point (and I am extrapolating this notion based on this article) about needing to have a more complete understanding of cognition before we can really delve into the science of creating a mind from scratch, so to speak.  

Then again, Ray Kurzweil now has the inconceivable resources and support of Google at his fingertips in order to accelerate his own research.  

One other thing I would like to address: in this article, Allen’s main premise is that the exponential growth in technology, which we have witnessed in the past, may not be as stable as many singularitarians would have you believe.  I can respect this view, but I would be remiss if I didn’t point out that Allen’s premise could work in the opposite direction just as easily.  Take the D-Wave quantum computer, for instance.  This computer represents a dramatic leap* forward in technological innovation which could actually compound the Law of Accelerating Returns beyond even its current exponential expansion.

*I refrain from using the obvious pun, quantum leap, when describing the D-Wave computer because by definition a quantum leap would actually be the smallest amount of progress one could conceivably make.  I heard that somewhere and thought it was amusing enough to repeat…

Credit: Technology Review


Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they’ll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It’s heady stuff.

While we suppose this kind of singularity might one day occur, we don’t think it is near. In fact, we think it will be a very long time coming. Kurzweil disagrees, based on his extrapolations about the rate of relevant scientific and technical progress. He reasons that the rate of progress toward the singularity isn’t just a progression of steadily increasing capability, but is in fact exponentially accelerating—what Kurzweil calls the “Law of Accelerating Returns.” He writes that:

So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate). The “returns,” such as chip speed and cost-effectiveness, also increase exponentially. There’s even exponential growth in the rate of exponential growth. Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity … [1]

By working through a set of models and historical data, Kurzweil famously calculates that the singularity will arrive around 2045.

This prediction seems to us quite far-fetched. Of course, we are aware that the history of science and technology is littered with people who confidently assert that some event can’t happen, only to be later proven wrong—often in spectacular fashion. We acknowledge that it is possible but highly unlikely that Kurzweil will eventually be vindicated. An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.

Kurzweil’s reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. They are assertions about how past rates of scientific and technical progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these “laws” will work until they don’t. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in a computer’s hardware technologies (memory, processing power, bus speed, etc.) but also in the software we create to run on these more capable computers. To achieve the singularity, it isn’t enough to just run today’s software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this.

This prior need to understand the basic science of cognition is where the “singularity is near” arguments fail to persuade us. It is true that computer hardware technology can develop amazingly quickly once we have a solid scientific framework and adequate economic incentives. However, creating the software for a real singularity-level computer intelligence will require fundamental scientific progress beyond where we are today. This kind of progress is very different than the Moore’s Law-style evolution of computer hardware capabilities that inspired Kurzweil and Vinge. Building the complex software that would allow the singularity to happen requires us to first have a detailed scientific understanding of how the human brain works that we can use as an architectural guide, or else create it all de novo. This means not just knowing the physical structure of the brain, but also how the brain reacts and changes, and how billions of parallel neuron interactions can result in human consciousness and original thought. Getting this kind of comprehensive understanding of the brain is not impossible. If the singularity is going to occur on anything like Kurzweil’s timeline, though, then we absolutely require a massive acceleration of our scientific progress in understanding every facet of the human brain.

But history tells us that the process of original scientific discovery just doesn’t behave this way, especially in complex areas like neuroscience, nuclear fusion, or cancer research. Overall scientific progress in understanding the brain rarely resembles an orderly, inexorable march to the truth, let alone an exponentially accelerating one. Instead, scientific advances are often irregular, with unpredictable flashes of insight punctuating the slow grind-it-out lab work of creating and testing theories that can fit with experimental observations. Truly significant conceptual breakthroughs don’t arrive when predicted, and every so often new scientific paradigms sweep through the field and cause scientists to reëvaluate portions of what they thought they had settled. We see this in neuroscience with the discovery of long-term potentiation, the columnar organization of cortical areas, and neuroplasticity. These kinds of fundamental shifts don’t support the overall Moore’s Law-style acceleration needed to get to the singularity on Kurzweil’s schedule.

The Complexity Brake

The foregoing points at a basic issue with how quickly a scientifically adequate account of human intelligence can be developed. We call this issue the complexity brake. As we go deeper and deeper in our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways. Understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake. Just think about what is required to thoroughly understand the human brain at a micro level. The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors. The closer we look at the brain, the greater the degree of neural variation we find. Understanding the neural structure of the human brain is getting harder as we learn more. Put another way, the more we learn, the more we realize there is to know, and the more we have to go back and revise our earlier understandings. We believe that one day this steady increase in complexity will end—the brain is, after all, a finite set of neurons and operates according to physical principles. But for the foreseeable future, it is the complexity brake and arrival of powerful new theories, rather than the Law of Accelerating Returns, that will govern the pace of scientific progress required to achieve the singularity.

So, while we think a fine-grained understanding of the neural structure of the brain is ultimately achievable, it has not shown itself to be the kind of area in which we can make exponentially accelerating progress. But suppose scientists make some brilliant new advance in brain scanning technology. Singularity proponents often claim that we can achieve computer intelligence just by numerically simulating the brain “bottom up” from a detailed neural-level picture. For example, Kurzweil predicts the development of nondestructive brain scanners that will allow us to precisely take a snapshot of a person’s living brain at the subneuron level. He suggests that these scanners would most likely operate from inside the brain via millions of injectable medical nanobots. But, regardless of whether nanobot-based scanning succeeds (and we aren’t even close to knowing if this is possible), Kurzweil essentially argues that this is the needed scientific advance that will gate the singularity: computers could exhibit human-level intelligence simply by loading the state and connectivity of each of a brain’s neurons inside a massive digital brain simulator, hooking up inputs and outputs, and pressing “start.”

However, the difficulty of building human-level software goes deeper than computationally modeling the structural connections and biology of each of our neurons. “Brain duplication” strategies like these presuppose that there is no fundamental issue in getting to human cognition other than having sufficient computer power and neuron structure maps to do the simulation.[2] While this may be true theoretically, it has not worked out that way in practice, because it doesn’t address everything that is actually needed to build the software. For example, if we wanted to build software to simulate a bird’s ability to fly in various conditions, simply having a complete diagram of bird anatomy isn’t sufficient. To fully simulate the flight of an actual bird, we also need to know how everything functions together. In neuroscience, there is a parallel situation. Hundreds of attempts have been made (using many different organisms) to chain together simulations of different neurons along with their chemical environment. The uniform result of these attempts is that in order to create an adequate simulation of the real ongoing neural activity of an organism, you also need a vast amount of knowledge about the functional role that these neurons play, how their connection patterns evolve, how they are structured into groups to turn raw stimuli into information, and how neural information processing ultimately affects an organism’s behavior. Without this information, it has proven impossible to construct effective computer-based simulation models. Especially for the cognitive neuroscience of humans, we are not close to the requisite level of functional knowledge. Brain simulation projects underway today model only a small fraction of what neurons do and lack the detail to fully simulate what occurs in a brain. The pace of research in this area, while encouraging, hardly seems to be exponential. Again, as we learn more and more about the actual complexity of how the brain functions, the main thing we find is that the problem is actually getting harder.
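The point that a wiring diagram underdetermines behavior can be illustrated with a deliberately crude toy simulation — a leaky integrate-and-fire sketch, not a model of any real neural circuit. The "structural map" (who connects to whom) is held fixed, while the functional parameters that no connectivity scan supplies — synaptic strength, firing threshold, leak rate — are varied:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed "structural map": which of 100 model neurons connect to which.
n = 100
adjacency = (rng.random((n, n)) < 0.1).astype(float)  # ~10% random connectivity


def simulate(weight, threshold=1.0, leak=0.9, steps=200):
    """Toy leaky integrate-and-fire dynamics on the fixed wiring diagram.
    The diagram alone says nothing about weight, threshold, or leak."""
    v = np.zeros(n)
    v[:10] = threshold          # kick-start ten neurons
    total_spikes = 0
    for _ in range(steps):
        spiking = v >= threshold
        total_spikes += int(spiking.sum())
        v = leak * v + weight * (adjacency @ spiking.astype(float))
        v[spiking] = 0.0        # reset neurons that just fired
    return total_spikes


# Identical structure, different functional parameters:
print(simulate(weight=0.05))   # weak synapses: activity dies after the initial kick
print(simulate(weight=0.5))    # strong synapses: activity cascades onward
```

With the same map, one parameter choice produces a circuit that falls silent and another produces one that keeps firing — a cartoon of why structural snapshots alone have not yielded working simulations.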

The AI Approach

Singularity proponents occasionally appeal to developments in artificial intelligence (AI) as a way to get around the slow rate of overall scientific progress in bottom-up, neuroscience-based approaches to cognition. It is true that AI has had great successes in duplicating certain isolated cognitive tasks, most recently with IBM’s Watson system for Jeopardy! question answering. But when we step back, we can see that overall AI-based capabilities haven’t been exponentially increasing either, at least when measured against the creation of a fully general human intelligence. While we have learned a great deal about how to build individual AI systems that do seemingly intelligent things, our systems have always remained brittle—their performance boundaries are rigidly set by their internal assumptions and defining algorithms, they cannot generalize, and they frequently give nonsensical answers outside of their specific focus areas. A computer program that plays excellent chess can’t leverage its skill to play other games. The best medical diagnosis programs contain immensely detailed knowledge of the human body but can’t deduce that a tightrope walker would have a great sense of balance.

Why has it proven so difficult for AI researchers to build human-like intelligence, even at a small scale? One answer involves the basic scientific framework that AI researchers use. As humans grow from infants to adults, they begin by acquiring a general knowledge about the world, and then continuously augment and refine this general knowledge with specific knowledge about different areas and contexts. AI researchers have typically tried to do the opposite: they have built systems with deep knowledge of narrow areas, and tried to create a more general capability by combining these systems. This strategy has not generally been successful, although Watson’s performance on Jeopardy! indicates paths like this may yet have promise. The few attempts that have been made to directly create a large amount of general knowledge of the world, and then add the specialized knowledge of a domain (for example, the work of Cycorp), have also met with only limited success. And in any case, AI researchers are only just beginning to theorize about how to effectively model the complex phenomena that give human cognition its unique flexibility: uncertainty, contextual sensitivity, rules of thumb, self-reflection, and the flashes of insight that are essential to higher-level thought. Just as in neuroscience, the AI-based route to achieving singularity-level computer intelligence seems to require many more discoveries, some new Nobel-quality theories, and probably even whole new research approaches that are incommensurate with what we believe now. This kind of basic scientific progress doesn’t happen on a reliable exponential growth curve. So although developments in AI might ultimately end up being the route to the singularity, again the complexity brake slows our rate of progress, and pushes the singularity considerably into the future.

The amazing intricacy of human cognition should serve as a caution to those who claim the singularity is close. Without having a scientifically deep understanding of cognition, we can’t create the software that could spark the singularity. Rather than the ever-accelerating advancement predicted by Kurzweil, we believe that progress toward this understanding is fundamentally slowed by the complexity brake. Our ability to achieve this understanding, via either the AI or the neuroscience approaches, is itself a human cognitive act, arising from the unpredictable nature of human ingenuity and discovery. Progress here is deeply affected by the ways in which our brains absorb and process new information, and by the creativity of researchers in dreaming up new theories. It is also governed by the ways that we socially organize research work in these fields, and disseminate the knowledge that results. At Vulcan and at the Allen Institute for Brain Science, we are working on advanced tools to help researchers deal with this daunting complexity, and speed them in their research. Gaining a comprehensive scientific understanding of human cognition is one of the hardest problems there is. We continue to make encouraging progress. But by the end of the century, we believe, we will still be wondering if the singularity is near.

Paul G. Allen, who cofounded Microsoft in 1975, is a philanthropist and chairman of Vulcan, which invests in an array of technology, aerospace, entertainment, and sports businesses. Mark Greaves is a computer scientist who serves as Vulcan’s director for knowledge systems.

[1] Kurzweil, “The Law of Accelerating Returns,” March 2001.

[2] We are beginning to get within range of the computer power we might need to support this kind of massive brain simulation. Petaflop-class computers (such as IBM’s BlueGene/P that was used in the Watson system) are now available commercially. Exaflop-class computers are currently on the drawing boards. These systems could probably deploy the raw computational capability needed to simulate the firing patterns for all of a brain’s neurons, though currently it happens many times more slowly than would happen in an actual brain.
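The footnote's claim can be checked with back-of-envelope arithmetic. Every figure below is an order-of-magnitude assumption of the kind commonly used in such estimates, not settled neuroscience:

```python
# Rough estimate of the compute needed for a real-time, neuron-level
# brain simulation. All figures are order-of-magnitude assumptions.
neurons = 1e11                 # ~100 billion neurons
synapses_per_neuron = 1e4      # ~10,000 synapses each
mean_firing_rate_hz = 10       # average spikes per neuron per second
flops_per_synaptic_event = 10  # cost to update one synapse per spike

flops_needed = (neurons * synapses_per_neuron
                * mean_firing_rate_hz * flops_per_synaptic_event)

petaflop_machine = 1e15        # BlueGene/P class
exaflop_machine = 1e18

print(f"required: ~{flops_needed:.0e} flop/s")                       # ~1e+17 flop/s
print(f"petaflop slowdown: {flops_needed / petaflop_machine:.0f}x")  # 100x
print(f"exaflop headroom: {exaflop_machine / flops_needed:.0f}x")    # 10x
```

On these assumptions a petaflop machine runs such a simulation roughly a hundredfold slower than real time, while an exaflop machine would have raw capacity to spare — which is exactly the distinction the footnote draws between having the computational power and having the functional knowledge to use it.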

UPDATE: Ray Kurzweil responds here.
