Don’t Fear Artificial Intelligence by Ray Kurzweil

This is an article from TIME by Ray Kurzweil called Don’t Fear Artificial Intelligence. Basically, Kurzweil’s stance is that “technology is a double-edged sword” and that it always has been, but that’s no reason to abandon the research. Kurzweil also states that “Virtually everyone’s mental capabilities will be enhanced by it within a decade.” I hope it makes people smarter and not just more intelligent!


Don’t Fear Artificial Intelligence


Two great thinkers see danger in AI. Here’s how to make it safe.

Stephen Hawking, the pre-eminent physicist, recently warned that artificial intelligence (AI), once it surpasses human intelligence, could pose a threat to the existence of human civilization. Elon Musk, the pioneer of digital money, private spaceflight and electric cars, has voiced similar concerns.

If AI becomes an existential threat, it won’t be the first one. Humanity was introduced to existential risk when I was a child sitting under my desk during the civil-defense drills of the 1950s. Since then we have encountered comparable specters, like the possibility of a bioterrorist creating a new virus for which humankind has no defense. Technology has always been a double-edged sword, since fire kept us warm but also burned down our villages.

The typical dystopian futurist movie has one or two individuals or groups fighting for control of “the AI.” Or we see the AI battling the humans for world domination. But this is not how AI is being integrated into the world today. AI is not in one or two hands; it’s in 1 billion or 2 billion hands. A kid in Africa with a smartphone has more intelligent access to knowledge than the President of the United States had 20 years ago. As AI continues to get smarter, its use will only grow. Virtually everyone’s mental capabilities will be enhanced by it within a decade.

We will still have conflicts among groups of people, each enhanced by AI. That is already the case. But we can take some comfort from a profound, exponential decrease in violence, as documented in Steven Pinker’s 2011 book, The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary somewhat from location to location, the rate of death in war is down hundredsfold compared with six centuries ago. Since that time, murders have declined tensfold. People are surprised by this. The impression that violence is on the rise results from another trend: exponentially better information about what is wrong with the world—another development aided by AI.

There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is perhaps a couple of decades ahead of AI. A meeting called the Asilomar Conference on Recombinant DNA was organized in 1975 to assess its potential dangers and devise a strategy to keep the field safe. The resulting guidelines, which have been revised by the industry since then, have worked very well: there have been no significant problems, accidental or intentional, for the past 39 years. We are now seeing major advances in medical treatments reaching clinical practice and thus far none of the anticipated problems.

Consideration of ethical guidelines for AI goes back to Isaac Asimov’s three laws of robotics, which appeared in his short story “Runaround” in 1942, eight years before Alan Turing introduced the field of AI in his 1950 paper “Computing Machinery and Intelligence.” The median view of AI practitioners today is that we are still several decades from achieving human-level AI. I am more optimistic and put the date at 2029, but either way, we do have time to devise ethical standards.

There are efforts at universities and companies to develop AI safety strategies and guidelines, some of which are already in place. Similar to the Asilomar guidelines, one idea is to clearly define the mission of each AI program and to build in encrypted safeguards to prevent unauthorized uses.
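
To make the “encrypted safeguards” idea concrete, here is a minimal sketch of one way such a safeguard could work. This is my illustration under assumed details (the key, the mission fields and the function names are all made up), not a mechanism Kurzweil specifies: the program refuses to act unless its mission definition carries a valid cryptographic signature from whoever governs it.

```python
# Minimal sketch: an AI task runner that only executes missions signed by a governance key.
# All names here (GOVERNANCE_KEY, sign_mission, run_if_authorized) are hypothetical.
import hashlib
import hmac
import json

GOVERNANCE_KEY = b"replace-with-a-real-secret-key"  # held by the overseeing organization

def sign_mission(mission: dict) -> str:
    """Produce an HMAC-SHA256 signature over a canonical form of the mission definition."""
    payload = json.dumps(mission, sort_keys=True).encode()
    return hmac.new(GOVERNANCE_KEY, payload, hashlib.sha256).hexdigest()

def run_if_authorized(mission: dict, signature: str) -> None:
    """Run the mission only if the signature proves it was authorized and not altered."""
    expected = sign_mission(mission)
    if not hmac.compare_digest(expected, signature):
        raise PermissionError("Mission definition is unauthorized or has been tampered with.")
    print(f"Running authorized mission: {mission['goal']}")

mission = {"goal": "diagnose medical images", "allowed_actions": ["read_scan", "write_report"]}
run_if_authorized(mission, sign_mission(mission))
```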

Ultimately, the most important approach we can take to keep AI safe is to work on our human governance and social institutions. We are already a human-machine civilization. The best way to avoid destructive conflict in the future is to continue the advance of our social ideals, which has already greatly reduced violence.

AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping to clean up the environment, providing high-quality education to people all over the world, helping the disabled (including providing Hawking’s voice) and contributing in a myriad of other ways. We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress. We have a moral imperative to realize this promise while controlling the peril. It won’t be the first time we’ve succeeded in doing this.

Kurzweil is the author of five books on artificial intelligence, including the recent New York Times best seller How to Create a Mind.


 

This article can also be found here.
 

 


Hugo de Garis – Singularity Skepticism (Produced by Adam Ford)

This is Hugo de Garis talking about why people tend to react to the idea of machine intelligence with a great deal of skepticism. To address the skeptics, de Garis explains Moore’s Law and goes into its many implications. Toward the end, de Garis makes the point that people will begin to come around when they see their household electronics getting smarter and smarter.


Runtime: 12:31


This video can also be found here and here.

Video Info:

Published on Jul 31, 2012

Hugo de Garis speaks about why people are skeptical about the possibility of machine intelligence, and also reasons for believing machine intelligence is possible, and quite probably will be an issue that we will need to face in the coming decades.

If the brain guys can copy how the brain functions closely enough…we will arrive at a machine based on neuroscience ideas and that machine will be intelligent and conscious

 

 

Ben Goertzel – Beginnings [on Artificial Intelligence – Thanks to Adam A. Ford for this video.]

In this video, Ben Goertzel talks a little about how he got into AGI research and about the research itself. I first heard of Ben Goertzel about four years ago, right when I was first studying computer science and considering a career in AI programming. At the time, I was trying to imagine how you would build an emotionally intelligent machine. I really enjoyed hearing some of his ideas at the time and still do. Also at the time, I was listening to a lot of Tony Robbins, so as you can imagine, I came up with some pretty interesting theories on artificial intelligence and empathetic machines. Maybe if I get enough requests I’ll write a special post on some of those ideas. You just let me know if you’re interested.


Runtime: 10:33


This video can also be found here and here.

Video Info:

Published on Jul 27, 2012

Ben Goertzel talks about his early stages in thinking about AI, and two books: The Hidden Pattern and Building Better Minds.

The interview was done in Melbourne, Australia, while Ben was down to speak at the Singularity Summit Australia 2011.

http://2011.singularitysummit.com.au

Interviewed, Filmed & Edited by Adam A. Ford
http://goertzel.org

 

Peter Voss Interview on Artificial General Intelligence

This is an interview with Peter Voss of Optimal talking about artificial general intelligence. One of the things Voss talks about is the skepticism that is a common reaction to talk of creating strong AI, and why (as Tony Robbins always says) the past does not equal the future. He also talks about why he thinks Ray Kurzweil’s prediction that AGI won’t be achieved for another 20 years is wrong (and I gotta say, he makes a good point). If you are interested in artificial intelligence or ethics in technology then you’ll want to watch this one…

And don’t worry, the line drawing effect at the beginning of the video only lasts a minute.


Runtime: 39:55


This video can also be found at https://www.youtube.com/watch?v=4W_vtlSjNk0

Video Info:

Published on Jan 8, 2013

Peter Voss is the founder and CEO of Adaptive A.I. Inc, an R&D company developing a high-level general intelligence (AGI) engine. He is also founder and CTO of Smart Action Company LLC, which builds and supplies AGI-based virtual contact-center agents — intelligent, automated phone operators.

Peter started his career as an entrepreneur, inventor, engineer and scientist at age 16. After several years of experience in electronics engineering, at age 25 he started a company to provide advanced custom hardware and software solutions. Seven years later the company employed several hundred people and was successfully listed on the Johannesburg Stock Exchange.

After selling his interest in the company in 1993, he worked on a broad range of disciplines — cognitive science, philosophy and theory of knowledge, psychology, intelligence and learning theory, and computer science — which served as the foundation for achieving new breakthroughs in artificial general intelligence. In 2001 he started Adaptive AI Inc., and last year founded Smart Action Company as its commercialization division.

Peter considers himself a free-minds-and-markets Extropian, and often writes and presents on philosophical topics including rational ethics, freewill and artificial minds. He is also deeply involved with futurism and life-extension.


http://www.optimal.org/peter/peter.htm

My main occupation is research in high-level, general (domain independent, autonomous) Artificial Intelligence — “Adaptive A.I. Inc.”

I believe that integrating insights from the following areas of cognitive science is crucial for rapid progress in this field:

Philosophy/epistemology – understanding the true nature of knowledge
Cognitive psychology (incl. developmental & psychometric) for analysis of cognition – and especially – general conceptual intelligence.
Computer science – self-modifying systems, combining new connectionist pattern manipulation techniques with ‘traditional’ AI engineering.
Anyone who shares my passion – and/or concerns – for this field is welcome to contact me for brainstorming and possible collaboration.

My other big passion is for exploring what I call Optimal Living: Maximizing both the quantity & quality of life. I see personal responsibility and optimizing knowledge acquisition as key. Specific interests include:

Rationality, as a means for knowledge. I’m largely sympathetic to the philosophy of Objectivism, and have done quite a bit of work on developing a rational approach to (personal & social) ethics.
Health (quality): physical, financial, cognitive, and emotional (passions, meaningful relationships, appreciation of art, etc.). Psychology: IQ & EQ.
Longevity (quantity): general research, CRON (calorie restriction), cryonics
Environment: economic, social, political systems conducive to Optimal Living.
These interests logically lead to an interest in Futurism, in technology for improving life – overcoming limits to personal growth & improvement. The transhumanist philosophy of Extropianism best embodies this quest. Specific technologies that seem to hold most promise include AI, Nanotechnology, & various health & longevity approaches mentioned above.

I always enjoy meeting new people to explore ideas, and to have my views critiqued. To this end I am involved in a number of discussion groups and salons (e.g. ‘Kifune’ futurist dinner/discussion group). Along the way I’m trying to develop and learn the complex art of constructive dialog.

Interview done at SENS party LA 20th Dec 2012.

 

 

Sean O’Heigeartaigh – Interview at Oxford Future of Humanity Institute (on Artificial Intelligence)

Here is a video interview with Sean O’Heigeartaigh.  O’Heigeartaigh speaks on the ethics of artificial intelligence, the technological singularity, augmented reality… he covers a lot of ground.  The video is called Sean O’Heigeartaigh – Interview at Oxford Future of Humanity Institute and it’s worth the watch.


 

Runtime: 47:01


This video can also be found at https://www.youtube.com/watch?v=cY90WIIrrlo 

Video Info:

Published on Jan 24, 2013

Dr Sean O hEigeartaigh
James Martin Academic Project Manager with the Oxford Martin Programme on the Impacts of Future Technology

Seán has a background in genetics, having recently finished his PhD in molecular evolution at Trinity College Dublin, where he focused on programmed ribosomal frameshifting and comparative genomic approaches to improve genome annotation. He is also the cofounder of a successful voluntary arts organisation in Ireland that now runs popular monthly events and an annual outdoor festival.

The Future of Humanity Institute is the leading research centre looking at big-picture questions for human civilization. The last few centuries have seen tremendous change, and this century might transform the human condition in even more fundamental ways. Using the tools of mathematics, philosophy, and science, we explore the risks and opportunities that will arise from technological change, weigh ethical dilemmas, and evaluate global priorities. Our goal is to clarify the choices that will shape humanity’s long-term future.

the Future of Humanity Institute: http://www.fhi.ox.ac.uk/

Humanity+ and the Upcoming Battle between Good and Evil by Jeanne Dietsch

This article from the humanity+ website (Humanity+ and the Upcoming Battle between Good and Evil) evaluates political stresses in light of transhumanism and the ever-nearing technological singularity.


 

Humanity+ and the Upcoming Battle between Good and Evil


Many transhumanists seek a better world, made possible through massively improved intellectual capacity, aka Humanity+.

Yet, though we have more power to achieve Good, we have no better understanding of Good than philosophers of millennia ago. If groups continue to gain power exponentially yet disagree on goals, the result might not be tranquility. So far, our super powers have heightened the potential for global destruction. The means to avoid war lies not in increasing the intelligence of our weaponry, but in taming the emotional, political and economic systems that feed its use. Will H+ really alter such psychological and social networks?

Will we finally be able to unite and collaborate toward a consensus goal?

Increased speed and capacity have demonstrably improved our ability to predict outcomes. Solving Texas Hold ‘em Poker is an impressive accomplishment. It suggests that once we decide on a goal, we will now be much more likely to discover the best way to achieve it, even if the path contains psychological bluffs and probability pitfalls.[i] With better speed, capacity and algorithms, our predictive and implementation powers grow.
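
Since the poker result above is, at bottom, an algorithmic one, a toy example may help. The sketch below is my illustration of regret matching, the self-play idea underlying the counterfactual-regret methods used to solve heads-up limit hold ’em; it is not the algorithm from the cited paper, and it uses rock-paper-scissors rather than poker. The average strategy converges toward the Nash equilibrium of one third each, which is the sense in which “better algorithms” find the best way to play even when the path contains bluffs and probability pitfalls.

```python
# Toy regret matching in rock-paper-scissors (illustrative only, not the CFR+ poker solver).
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # payoff to the player choosing the row

def strategy_from(regrets):
    """Play each action in proportion to its accumulated positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def sample(strategy):
    return random.choices(range(ACTIONS), weights=strategy)[0]

regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
strategy_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]

for _ in range(20000):
    strats = [strategy_from(regrets[0]), strategy_from(regrets[1])]
    moves = [sample(strats[0]), sample(strats[1])]
    for p in range(2):
        my_move, opp_move = moves[p], moves[1 - p]
        realized = PAYOFF[my_move][opp_move]
        for a in range(ACTIONS):
            # Regret = how much better action a would have done than what was actually played.
            regrets[p][a] += PAYOFF[a][opp_move] - realized
            strategy_sum[p][a] += strats[p][a]

average = [s / sum(strategy_sum[0]) for s in strategy_sum[0]]
print("player 1 average strategy:", [round(x, 3) for x in average])  # roughly [0.333, 0.333, 0.333]
```
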
Our goals, however, remain contentious. Each religious and philosophical in-group defines its own path to Good, Enlightenment or Heaven. To compress such variation into a single metric, some transhumanists propose sampling world populations or collecting a particularly enlightened group of religious and philanthropic leaders to create humanitarian norms that will be used to guide AGI behavior.

The latter was actually already accomplished on December 10, 1948, in response to the Second World War. The drafters included Dr. Charles Malik (Lebanon), Alexandre Bogomolov (USSR), Dr. Peng-chun Chang (former Republic of China), René Cassin (France), Eleanor Roosevelt (US, Chair), Charles Dukes (United Kingdom), William Hodgson (Australia), Hernan Santa Cruz (Chile) and John P. Humphrey (Canada), with input from dozens of other representatives of nations as diverse as India and Iran.[ii]

The document is the United Nations Universal Declaration of Human Rights[iii]. Forty-eight nations with widely varying cultures signed this Declaration. However, even in the case of something so broadly accepted, even within the consensus-seeking environment following WWII, eight nations abstained from support: the Soviet Union and five affiliated nations, plus Saudi Arabia and apartheid South Africa. And, although the new People’s Republic of China joined the UN in 1971, it publicly and pointedly values economic progress over human rights, at least until it catches up to developed countries.[iv] Moreover, a number of its 1.3 billion citizens agree.

The point is that there is no coalescing consensus of what goals for humankind should be, even on something as basic as fundamental human rights. Conflict has been our past and will be our future. Some transhumanists talk about upcoming battles.

Hugo de Garis[v] expects conflict between “Terrans” who want to remain homo sapiens and “Cosmists” who expect AGI to replace humans, but how long will struggles last between those who welcome super powers and those who fight them? More likely, the long-term wars of the future will resemble those that ravage us now. Although many young educated adults believe their generation is more cosmopolitan, less nationalistic and more humanitarian, their counterparts are joining conservative, anti-immigration political movements, or even the murderous Islamic State! Do we really believe that only those with progressive Western values will control all H+’s underlying drives? And, if not, are we not arming the enemy at the same time we arm ourselves with greater intelligence?

But fear of misuse is almost never a reason not to pursue knowledge. Perhaps H+, with superior intelligence, will be able to decode the patterns of the Universe and finally explain to us why we are here. Perhaps these super beings will finally reach consensus on our goals?

The aspiration for such a superhuman race is not a recent dream. In fact, over a century ago, Nietzsche wrote, in Also Sprach Zarathustra, that the ultimate purpose of humankind was to create a being transcending human abilities, an ubermensch. While ubermensch is often translated into English as “super man”, it is actually much closer to the concept of H+. The ubermensch was a person above all weaker beings, an empiricist who gained knowledge from his senses just as H+ will gain knowledge from trillions of sensors. The ubermensch would not be constrained by religious truisms but understand Nature directly.

However, ubermensch and H+ differ in at least two ways. First, Nietzsche’s character denigrated Platonic concepts and other abstractions because he considered them removed from experience, whereas we now view conceptual hierarchies as the brain’s means of finding patterns and thinking efficiently. We expect H+ to be able to abstract patterns in ways that will enable it to predict future developments far better than homo sapiens. Secondly, H+ differs from ubermensch in its attitude toward the body. Nietzsche saw the body as the essence of humankind. H+ hopes to escape it. In fact, the H+ holy grail of substrate-independent intelligence – uploading brains – very closely mirrors the Christian concept of a soul, the essence of a person that lives on after the body dies.

This other-worldly aspiration was anathema to Nietzsche at the time because it was not grounded in reality. Would he feel the same way today when physics has transformed much of the invisible to material? Perhaps not.

Regardless, is not the goal of transhumanists the creation of a new, ideal being that will understand its purpose better than we do? Are we not, in our struggle to bring meaning to our lives, setting the creation of H+ as a reason for humankind’s existence, for our own existence? In all honesty, are we really seeking something so different from what humans have sought for millennia: a reason, a cause, a goal for existence?

If so, we might also consider Nietzsche’s conclusion. Such goals are futile. Nietzsche viewed Darwinian evolution not as a march toward the ideal, but as a climb across ever-changing terrain. Nietzsche viewed creations as cyclic, or — as we might say today — fractal. From this perspective, creating an ubermensch will not lead to an idyllic existence; it will not stop our struggle; it will only transfer it to venues of a different scale: enormous gullies or minutest crevices. The only force that will stop us fighting among ourselves is a greater threat from beyond.

In fact, Nietzsche came to believe that it is the balancing of conflict with structure, chaos with art, and entropy with life that is each individual’s goal. When Maxwell’s demon opens the door and differences disappear into unchanging calmness, Life is over. Meanwhile, H+ will supersede homo sapiens, but only as one more level of being. We can evolve into ubermenschen, better suited than our hunter-gatherer-brained predecessors to live in today’s complexity, but H+ will not be perfect and will never be finished.

Our ultimate purpose will forever remain just out of sight, past the misty curve of hyperspace.


References

[i] Bowling, Michael; Burch, Neil; Johanson, Michael; Tammelin, Oskari. (2015) Science (Washington, DC, United States) 347(6218), 145-149.

[ii] The Drafters of the Universal Declaration of Human Rights. (2015) United Nations, New York, NY, US. http://www.un.org/en/documents/udhr/drafters.shtml

[iii] United Nations Universal Declaration of Human Rights (1948), United Nations, New York, NY, US. http://www.un.org/en/documents/udhr/index.shtml

[iv] Moore, Greg. (1999) China’s Cautious Participation in the UN Human Rights Regime, in A review of China, the United Nations, and Human Rights: The Limits of Compliance, editor, Ann Kent. Philadelphia: University of Pennsylvania Press.

[v] De Garis, Hugo. (2013) “Will there be cyborgs?” Between Ape and Artilect: Conversations with Pioneers of Artificial General Intelligence and Other Transformative Technologies, editor, Ben Goertzel, Humanity+ Press, Los Angeles, CA.

About the author

Jeanne Dietsch is a serial tech entrepreneur, Harvard graduate in sci-tech policy, group-thinking facilitator and founder of Sapiens Plurum, an advocacy organization looking out for the interests of humankind.

Jeanne Dietsch
Sapiens Plurum “The Wisdom of Many”

Blog: Saving Humankind-ness

jdietsch@post.harvard.edu


This article can also be found here.

 

What is Transhumanism? by Nick Bostrom at the World Transhumanist Association

What is transhumanism? This part definition, part article on transhumanism is from the World Transhumanist Association website and was written by Nick Bostrom.


 

What is Transhumanism?

Over the past few years, a new paradigm for thinking about humankind’s future has begun to take shape among some leading computer scientists, neuroscientists, nanotechnologists and researchers at the forefront of technological development. The new paradigm rejects a crucial assumption that is implicit in both traditional futurology and practically all of today’s political thinking. This is the assumption that the “human condition” is at root a constant. Present-day processes can be fine-tuned; wealth can be increased and redistributed; tools can be developed and refined; culture can change, sometimes drastically; but human nature itself is not up for grabs.

This assumption no longer holds true. Arguably it has never been true. Such innovations as speech, written language, printing, engines, modern medicine and computers have had a profound impact not just on how people live their lives, but on who and what they are. Compared to what might happen in the next few decades, these changes may have been slow and even relatively tame. But note that even a single additional innovation as important as any of the above would be enough to invalidate orthodox projections of the future of our world.

“Transhumanism” has gained currency as the name for a new way of thinking that challenges the premiss that the human condition is and will remain essentially unalterable. Clearing away that mental block allows one to see a dazzling landscape of radical possibilities, ranging from unlimited bliss to the extinction of intelligent life. In general, the future by present lights looks very weird – but perhaps very wonderful – indeed.

Some of the possibilities that you will no doubt hear discussed in the coming years are quite extreme and sound like science-fiction. Consider the following:

• Superintelligent machines. Superintelligence means any form of artificial intelligence, maybe based on “self-taught” neural networks, that is capable of outclassing the best human brains in practically every discipline, including scientific creativity, practical wisdom, and social skills. Several commentators have argued that both the hardware and the software required for superintelligence might be developed in the first few decades of the next century. (See Moravec [1998] and Bostrom [1998].)
• Lifelong emotional well-being through re-calibration of the pleasure-centers. Even today, mild variants of sustainable euphoria are possible for a minority of people who respond especially well to clinical mood-brighteners (“antidepressants”). Pharmaceuticals currently under development promise to give an increasing number of “normal” people the choice of drastically reducing the incidence of negative emotions in their lives. In some cases, the adverse side-effects of the new agents are negligible. Whereas street drugs typically wreak havoc on the brain’s neurochemistry, producing a brief emotional “high” followed by a crash, modern clinical drugs may target with high specificity a given neurotransmitter or receptor subtype, thereby avoiding any negative effect on the subject’s cognitive faculties – (s)he won’t feel “drugged” – and enabling a constant, indefinitely sustainable mood-elevation without being addictive. David Pearce [1997] advocates and predicts a post-Darwinian era in which all aversive experience will be replaced by gradients of pleasure beyond the bounds of normal human experience. As cleaner and safer mood-brighteners and gene-therapies become available, paradise-engineering may become a practicable possibility.
• Personality pills. Drugs and gene therapy will yield far more than shallow one-dimensional pleasure. They can also modify personality. They can help overcome shyness, eliminate jealousy (Kramer [1994]), increase creativity and enhance the capacity for empathy and emotional depth. Think of all the preaching, fasting and self-discipline that people have subjected themselves to throughout the ages in attempts to ennoble their character. Shortly it may become possible to achieve the same goals much more thoroughly by swallowing a daily cocktail pill.
• Space colonization. Today, space colonization is technologically feasible but prohibitively expensive. As costs decrease, it will become economically and politically possible to begin to colonize space. The thing to note is that once a single self-sustaining colony has been established, capable of sending out its own colonization probes, then an exponentially self-replicating process has been set in motion that is capable – without any further input from the planet Earth – of spreading out across the millions of stars in our galaxy and then to millions of other galaxies as well. Of course, this sequence of events will take an extremely long time on a human time-scale. But it is interesting to notice how near we are to being able to initiate a chain of events that will have such momentous consequences as filling the observable universe with our descendants.
• Molecular nanotechnology. Nanotechnology is the hypothetical design and manufacture of machines to atomic-scale precision, including general-purpose “assemblers”, devices that can position atoms individually in order to build almost any chemically permitted matter-configuration for which we can give a detailed specification – including exact copies of themselves. An existence-proof of a limited form of nanotechnology is given by biology: the cell is a molecular self-replicator that can produce a broad range of proteins. But the part of design space that is accessible to present biological organisms is restricted by their evolutionary history, and is mostly confined to non-rigid carbon structures. Eric Drexler ([1988], [1992]) was the first person to analyze in detail the physical possibility of a practically universal molecular assembler. Once such a gadget exists, it would make possible dirt-cheap (but perfectly clean) production of almost any commodity, given a design-specification and the requisite input of energy and atoms. The bootstrap problem for nanotechnology – how to build this first assembler – is very hard to solve. Two approaches are currently pursued. One of them builds on what nature has achieved and seeks to use biochemistry to engineer new proteins that can serve as tools in further engineering efforts. The other attempts to build atomic structures from scratch, using proximal probes such as atomic-force microscopes to position atoms one-by-one on a surface. The two methods can potentially be used in conjunction. Much research is required before the physical possibility of Drexlerian nanotechnology can be turned into an actuality; it will certainly not happen in the next couple of years, but it might come about in the first few decades of the next century.
• Vastly extended life spans. It may prove feasible to use radical gene-therapy and other biological methods to block normal aging processes, and to stimulate rejuvenation and repair mechanisms indefinitely. It is also possible that nothing short of nanotechnology will do the trick. Meanwhile there are unproven and in some cases expensive hormone treatments that seem to have some effect on general vitality in elderly people, although as yet nothing has been shown to be more effective at life-extension than controlled caloric restriction.
• Extinction of intelligent life. The risks are as enormous as the potential benefits. In addition to dangers that are already recognized (though perhaps inadequately counteracted?), such as a major military, terrorist or accidental disaster involving nuclear, chemical, viral or bacteriological agents, the new technologies threaten dangers of a different order altogether. Nanotechnology, for example, could pose a terrible threat to our existence if obtained by some terrorist group before adequate defense systems have been developed. It is not even certain that adequate defense is possible. Perhaps in a nanotechnological world offense has a decisive intrinsic advantage over defense. Nor is it farfetched to assume that there are other risks that we haven’t yet been able to imagine.
• The interconnected world. Even in its present form, the Internet has an immense impact on some people’s lives. And its ramifications are just beginning to unfold. This is one area where radical change is quite widely perceived, and where media discussion has been extensive.
• Uploading of our consciousness into a virtual reality. If we could scan the synaptic matrix of a human brain and simulate it on a computer then it would be possible for us to migrate from our biological embodiments to a purely digital substrate (given certain philosophical assumptions about the nature of consciousness and personal identity). By making sure we always had back-up copies, we might then enjoy effectively unlimited life-spans. By directing the activation flow in the simulated neural networks, we could engineer totally new types of experience. Uploading, in this sense, would probably require mature nanotechnology. But there are less extreme ways of fusing the human mind with computers. Work is being done today on developing neuro/chip interfaces. The technology is still in its early stages; but it might one day enable us to build neuroprostheses whereby we could “plug in” to cyberspace. Even less speculative are various schemes for immersive virtual reality – for instance using head-mounted displays – that communicate with the brain via our natural sense organs. (A toy sketch of what “simulating a synaptic matrix” means follows this list.)
• Reanimation of cryogenically-suspended patients. Persons frozen with today’s procedure can probably not be brought back to life with anything less than mature nanotechnology. Even if we could be absolutely sure that mature nanotechnology will one day be developed, there would still be no guarantee that the cryonics customer’s gamble would succeed – perhaps the beings of the future won’t be interested in reanimating present-day humans. Still, even a 5% or 10% chance of success could make an Alcor contract a rational option for people who can afford it and who place a great value on their continued personal existence. If reanimated, they might look forward to aeons of subjective life time under conditions of their own choosing.
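
As promised above, here is a toy sketch of what “scanning the synaptic matrix and simulating it on a computer” means in miniature. This is my illustration, not anything in Bostrom’s text: the connectome is treated as a weight matrix W and a simple rate model is iterated over it. A real upload would involve on the order of 10^14 synapses and far more biophysical detail; this only shows the shape of the idea.

```python
# Toy "synaptic matrix" simulation: iterate a rate model over a random weight matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 100                                              # a toy "brain" of 100 units
W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))   # stand-in for a scanned synaptic matrix
x = rng.normal(size=n)                               # initial activity of each unit

for step in range(50):
    x = np.tanh(W @ x)                               # next state = squashed weighted input

print("mean activity after 50 steps:", round(float(x.mean()), 4))
```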

These prospects might seem remote. Yet transhumanists think there is reason to believe that they might not be so far off as is commonly supposed. The Technology Postulate denotes the hypothesis that several of the items listed, or other changes that are equally profound, will become feasible within, say, seventy years (possibly much sooner). This is the antithesis of the assumption that the human condition is a constant. The Technology Postulate is often presupposed in transhumanist discussion. But it is not an article of blind faith; it’s a falsifiable hypothesis that is argued for on specific scientific and technological grounds.

If we come to believe that there are good grounds for believing that the Technology Postulate is true, what consequences does that have for how we perceive the world and for how we spend our time? Once we start reflecting on the matter and become aware of its ramifications, the implications are profound.

From this awareness springs the transhumanist philosophy – and “movement”. For transhumanism is more than just an abstract belief that we are about to transcend our biological limitations by means of technology; it is also an attempt to re-evaluate the entire human predicament as traditionally conceived. And it is a bid to take a far-sighted and constructive approach to our new situation. A primary task is to provoke the widest possible discussion of these topics and to promote a better public understanding. The set of skills and competencies that are needed to drive the transhumanist agenda extend far beyond those of computer scientists, neuroscientists, software-designers and other high-tech gurus. Transhumanism is not just for brains accustomed to hard-core futurism. It should be a concern for our whole society.

The Foresight Institute is an excellent source of information about nanotechnology-related issues. They organize annual conferences and have built up a substantial infrastructure of expertise in nanotechnology. The Extropy Institute has organized several international conferences on general transhumanist themes, and its president Max More has done much to get extropian memes out into the mass media. (Extropianism is a distinctive type of transhumanism, defined by the Extropian Principles.) In 1997, the World Transhumanist Association was founded, with the aim of turning transhumanism into a mainstream academic discipline and also to facilitate networking between different transhumanist groups and local chapters and among individual transhumanists, both academic and non-academic. The WTA publishes the electronic Journal of Transhumanism, featuring leading-edge research papers by scholars working in transhumanist-related disciplines. The WTA web pages are one good starting place to find out more about transhumanism.

It is extremely hard to anticipate the long-term consequences of our present actions. But rather than sticking our heads in the sand, transhumanists reckon we should at least try to plan for them as best we can. In doing so, it becomes necessary to confront some of the notorious “big questions”, such as the so-called Fermi paradox (“Why haven’t we seen any signs of intelligent extraterrestrial life?”). This problem requires delving into a number of different scientific disciplines. The Fermi paradox is not only intellectually stimulating, it is also potentially practically important since it could turn out to have consequences for whether we should expect to survive and colonize the universe (Hanson [1996]). At present, though, it appears that the state of evolutionary biology is insufficiently advanced to allow us to draw any firm conclusions about our own future from this type of consideration. Another purported indirect source of information about our own future is the highly controversial Carter-Leslie Doomsday argument. This attempts to prove from basic principles of probability theory together with some trivial empirical assumptions that human extinction in the next century is much more likely than has previously been thought. The argument, which uses a version of the Anthropic Principle, was first conceived by astrophysicist Brandon Carter and was later developed by philosopher John Leslie [1996] and others. So far, nobody has been able to explain to general satisfaction what, if anything, is wrong with it (Bostrom [1998]).
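
The Doomsday argument is, at heart, a small Bayesian calculation, so a toy version may make the reasoning concrete. The numbers below are made up for illustration and the framing is my own simplification, not Carter’s or Leslie’s full argument: treat your own birth rank as a random draw from everyone who will ever live, and compare a small-total-population hypothesis against a large one.

```python
# Toy Carter-Leslie Doomsday calculation (illustrative numbers only).
PRIOR_SOON, PRIOR_LATE = 0.5, 0.5       # prior credence in "doom soon" vs. "doom late"
N_SOON, N_LATE = 200e9, 200e12          # hypothetical total numbers of humans ever born

# Under uniform self-sampling, the chance of having any one particular birth rank is 1/N,
# so an early rank (roughly 100 billion humans born so far) favors the smaller total N.
likelihood_soon = 1.0 / N_SOON
likelihood_late = 1.0 / N_LATE

posterior_soon = (PRIOR_SOON * likelihood_soon) / (
    PRIOR_SOON * likelihood_soon + PRIOR_LATE * likelihood_late
)
print(f"posterior probability of 'doom soon': {posterior_soon:.3f}")  # about 0.999
```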

While the wider perspective and the bigger questions are essential to transhumanism, that does not mean that transhumanists do not take an intense interest in what goes on in our world today. On the contrary! Recent topical themes that have been the subject of wide and lively debate in transhumanist forums include such diverse issues as cloning; proliferation of weapons of mass-destruction; neuro/chip interfaces; psychological tools such as critical thinking skills, NLP, and memetics; processor technology and Moore’s law; gender roles and sexuality; neural networks and neuromorphic engineering; life-extension techniques such as caloric restriction; PET, MRI and other brain-scanning methods; evidence(?) for life on Mars; transhumanist fiction and films; quantum cryptography and “teleportation”; the Digital Citizen; atomic force microscopy as a possible enabling technology for nanotechnology; electronic commerce… Not all participants are equally at home in all of these fields, of course, but many like the experience of taking part in a joint exploration of unfamiliar ideas, facts and standpoints.

An important transhumanist goal is to improve the functioning of human society as an epistemic community. In addition to trying to figure out what is happening, we can try to figure out ways of making ourselves better at figuring out what is happening. We can create institutions that increase the efficiency of the academic- and other knowledge-communities. More and more people are gaining access to the Internet. Programmers, software designers, IT consultants and others are involved in projects that are constantly increasing the quality and quantity of advantages of being connected. Hypertext publishing and the collaborative information filtering paradigm (Chislenko [1997]) have the potential to accelerate the propagation of valuable information and aid the demolition of what transpire to be misconceptions and crackpot claims. The people working in information technology are only the latest reinforcement to the body of educators, scientists, humanists, teachers and responsible journalists who have been striving throughout the ages to decrease ignorance and make humankind as a whole more rational.

One simple but brilliant idea, developed by Robin Hanson [1990], is that we create a market of “idea futures”. Basically, this means that it would be possible to place bets on all sorts of claims about controversial scientific and technological issues. One of the many benefits of such an institution is that it would provide policy-makers and others with consensus estimates of the probabilities of uncertain hypotheses about projected future events, such as when a certain technological breakthrough will occur. It would also offer a decentralized way of providing financial incentives for people to make an effort to be right in what they think. And it could promote intellectual sincerity in that persons making strong claims would be encouraged to put their money where their mouth is. At present, the idea is embodied in an experimental set-up, the Foresight Exchange, where people can stake “credibility points” on a variety of claims. But for its potential advantages to materialize, a market has to be created that deals in real money and is as integrated in the established economic structure as are current stock exchanges. (Present anti-gambling regulations are one impediment to this; in many countries betting on anything other than sport and horses is prohibited.)
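
To see how an idea-futures market turns individual bets into a consensus probability, here is a minimal sketch using the logarithmic market scoring rule that Hanson later proposed for exactly this purpose; the liquidity parameter and the trade below are made up for illustration, and the Foresight Exchange itself does not necessarily work this way.

```python
# Minimal logarithmic market scoring rule (LMSR) for a binary claim (illustrative only).
import math

B = 100.0                       # liquidity parameter: larger B means prices move more slowly
q = {"yes": 0.0, "no": 0.0}     # outstanding shares on each side of the claim

def cost(q):
    return B * math.log(math.exp(q["yes"] / B) + math.exp(q["no"] / B))

def price_yes(q):
    """The price of a 'pays $1 if true' share doubles as the market's probability estimate."""
    e_yes, e_no = math.exp(q["yes"] / B), math.exp(q["no"] / B)
    return e_yes / (e_yes + e_no)

def buy(q, outcome, shares):
    """Buy shares from the market maker; returns what the trader pays."""
    before = cost(q)
    q[outcome] += shares
    return cost(q) - before

print("initial consensus estimate:", round(price_yes(q), 3))   # 0.5 before any bets
paid = buy(q, "yes", 60)                                       # an optimist bets the claim is true
print("trader paid:", round(paid, 2))
print("consensus estimate after the bet:", round(price_yes(q), 3))
```

A policy-maker reading such a market needs only the current price: it is the bettors’ aggregated, money-weighted probability that the claim will come true, which is precisely the “consensus estimate” described above.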

The transhumanist outlook can appear cold and alien at first. Many people are frightened by the rapid changes they are witnessing and respond with denial or by calling for bans on new technologies. It’s worth recalling how pain relief at childbirth through the use of anesthetics was once deplored as unnatural. More recently, the idea of “test-tube babies” has been viewed with abhorrence. Genetic engineering is widely seen as interfering with God’s designs. Right now, the biggest moral panic is cloning. We have today a whole breed of well-meaning biofundamentalists, religious leaders and so-called ethical experts who see it as their duty to protect us from whatever “unnatural” possibilities that don’t fit into their preconceived world-view. The transhumanist philosophy is a positive alternative to this ban-the-new approach to coping with a changing world. Instead of rejecting the unprecedented opportunities on offer, it invites us to embrace them as vigorously as we can. Transhumanists view technological progress as a joint human effort to invent new tools that we can use to reshape the human condition and overcome our biological limitations, making it possible for those who so want to become “post-humans”. Whether the tools are “natural” or “unnatural” is entirely irrelevant.

Transhumanism is not a philosophy with a fixed set of dogmas. What distinguishes transhumanists, in addition to their broadly technophiliac values, is the sort of problems they explore. These include subject matter as far-reaching as the future of intelligent life, as well as much more narrow questions about present-day scientific, technological or social developments. In addressing these problems, transhumanists aim to take a fact-driven, scientific, problem-solving approach. They also make a point of challenging holy cows and questioning purported impossibilities. No principle is beyond doubt, not the necessity of death, not our confinement to the finite resources of planet Earth, not even transhumanism itself is held to be too good for constant critical reassessment. The ideology is meant to evolve and be reshaped as we move along, in response to new experiences and new challenges. Transhumanists are prepared to be shown wrong and to learn from their mistakes.

Transhumanism can also be very practical and down-to-earth. Many transhumanists find ways of applying their philosophy to their own lives, ranging from the use of diet and exercise to improve health and life-expectancy; to signing up for cryonic suspension; making money from investing in technology stocks; creating transhumanist art; using clinical drugs to adjust parameters of mood and personality; applying various psychological self-improvement techniques; and in general taking steps to live richer and more responsible lives. An empowering mind-set that is common among transhumanists is dynamic optimism: the attitude that desirable results can in general be accomplished, but only through hard effort and smart choices (More [1997]).

Are you a transhumanist? If so, then you can look forward to increasingly seeing your own views reflected in the media and in society. For it is clear that transhumanism is an idea whose time has come.

Nick Bostrom
Department of Philosophy, Logic and Scientific method
London School of Economics
nick@nickbostrom.com

References

Bostrom, N. 1998. “How long before superintelligence?” International Journal of Futures Studies, 2. (Also available at http://www.hedweb.com/nickb/superintelligence.htm)

Bostrom, N. 1998. “Investigations into the Doomsday Argument”
http://www.anthropic-principle.com/preprints/inv/investigations.html

Bostrom, N. 1997. “The Fermi Paradox”
http://www.ndirect.co.uk/~transhumanism/Fermi.htm

Chislenko, A. 1997. “Collaborative Information Filtering” http://www.lucifer.com/~sasha/articles/ACF.html

Drexler, E. 1992. Nanosystems. John Wiley & Sons, New York.

Drexler, E. 1988. Engines of Creation: The Coming Era of Nanotechnology. Fourth Estate. London. http://www.foresight.org/EOC/index.html

Hanson, R. 1996. “The Great Filter: Are we almost past it?”
http://hanson.berkeley.edu/

Kramer, P. 1994. Listening to Prozac. Penguin. U.S.A.

Leslie, J. 1996. The End of the World: The Ethics and Science of Human Extinction. Routledge, New York.

More, M. 1997. “The Extropian Principles”
http://www.extropy.com/~exi/extprn26.htm

More, M. 1995. “Dynamic optimism: Epistemological Psychology for Extropians”
http://www.primenet.com/~maxmore/optimism.htm

Moravec, H. 1998. Robot: Mere Machine to Transcendent Mind. Oxford Univ. Press.

Pearce, D. 1997. “The Hedonistic Imperative”.
http://www.hedweb.com/hedab.htm

Institutes

Extropy Institute
http://www.extropy.org/

Foresight Exchange
http://www.ideosphere.com/fx/main.html

Foresight Institute
http://www.foresight.org/

World Transhumanist Association
http://www.transhumanism.com/

I am grateful to David Pearce and Anders Sandberg for extensive comments on earlier versions of this text. N. B.

 


This article can also be found here.

 

PostHuman: An Introduction to Transhumanism from the British Institute of Posthuman Studies

This video by the British Institute of Posthuman Studies explores three areas of transhumanism: super longevity, super intelligence, and super well-being. It’s called PostHuman: An Introduction to Transhumanism and it’s a great video to show your friends who have never heard of transhumanism or the technological singularity.


Runtime: 11:11


This video can also be found at https://www.youtube.com/watch?v=bTMS9y8OVuY

Video Info:

Published on Nov 5, 2013

We investigate three dominant areas of transhumanism: super longevity, super intelligence and super wellbeing, and briefly cover the ideas of thinkers Aubrey de Grey, Ray Kurzweil and David Pearce.

Official Website: http://biops.co.uk
Facebook: https://www.facebook.com/biopsuk
Twitter: https://twitter.com/biopsuk
Google+: http://gplus.to/biops

Written by: Peter Brietbart and Marco Vega
Animation & Design Lead: Many Artists Who Do One Thing (Mihai Badic)
Animation Script: Mihai Badic and Peter Brietbart
Narrated by: Holly Hagan-Walker
Music and SFX: Steven Gamble
Design Assistant: Melita Pupsaite
Additional Animation: Nicholas Temple
Other Contributors: Callum Round, Asifuzzaman Ahmed, Steffan Dafydd, Ben Kokolas, Cristopher Rosales
Special Thanks: David Pearce, Dino Kazamia, Ana Sandoiu, Dave Gamble, Tom Davis, Aidan Walker, Hani Abusamra, Keita Lynch

 

From the Human Brain to the Global Brain by Marios Kyriazis

This paper (From the Human Brain to the Global Brain by Marios Kyriazis) talks about brain augmentation and the possible (probable?) emergence of a global brain.  This is actually a concept which is quite familiar to me because it is the backdrop to a science fiction novel (possibly series) I’ve been writing in my spare time – limited as that may be, but more on that another time.  I’d just like to point out (and I know I’m not the first) that we already have the framework (the internet) for a rudimentary global brain.  Really, all it lacks is sophistication.


 

From the Human Brain to the Global Brain

Introduction

Human intelligence (i.e., the ability to consistently solve problems successfully) has evolved through the need to adapt to changing environments. This is not only true of our past but also of our present. Our brain faculties are becoming more sophisticated by cooperating and interacting with technology, specifically digital communication technology (Asaro, 2008).

When we consider the matter of brain function augmentation, we take it for granted that the issue refers to the human brain as a distinct organ. However, as we live in a complex technological society, it is now becoming clear that the issue is much more complicated. Individual brains cannot simply be considered in isolation, and their function is no longer localized or contained within the cranium, as we now know that information may be transmitted directly from one brain to another (Deadwyler et al., 2013; Pais-Vieira et al., 2013). This issue has been discussed in detail and attempts have been made to study the matter within a wider and more global context (Nicolelis and Laporta, 2011). Recent research in the field of brain to brain interfaces has provided the basis for further research and formation of new hypotheses in this respect (Grau et al., 2014; Rao et al., 2014). This concept of rudimentary “brain nets” may be expanded in a more global fashion, and within this framework, it is possible to envisage a much bigger and abstract “meta-entity” of inclusive and distributed capabilities, called the Global Brain (Mayer-Kress and Barczys, 1995; Heylighen and Bollen, 1996; Johnson et al., 1998; Helbing, 2011; Vidal, in press).

This entity reciprocally feeds information back to its components—the individual human brains. As a result, novel and hitherto unknown consequences may materialize such as, for instance, the emergence of rudimentary global “emotion” (Garcia and Tanase, 2013; Garcia et al., 2013; Kramera et al., 2014), and the appearance of decision-making faculties (Rodriguez et al., 2007). These characteristics may have direct impact upon our biology (Kyriazis, 2014a). This has been long discussed in futuristic and sociology literature (Engelbart, 1988), but now it also becomes more relevant to systems neuroscience partly because of the very promising research in brain-to-brain interfaces. The concept is grounded on scientific principles (Last, 2014a) and mathematical modeling (Heylighen et al., 2012).

Augmenting Brain Function on a Global Scale

It can be argued that the continual enhancement of brain function in humans, i.e., the tendency to an increasing intellectual sophistication, broadly aligns well with the main direction of evolution (Steward, 2014). This tendency to an increasing intellectual sophistication also obeys Ashby’s Law of Requisite Variety (Ashby, 1958) which essentially states that, for any system to be stable, the number of states of its control mechanisms must be greater than the number of states in the system being controlled. This means that, within an ever-increasing technological environment, we must continue to increase our brain function (mostly through using, or merging with, technology such as in the example of brain to brain communication mentioned above), in order to improve integration and maintain stability of the wider system. Several other authors (Maynard Smith and Szathmáry, 1997; Woolley et al., 2010; Last, 2014a) have expanded on this point, which seems to underpin our continual search for brain enrichment.
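
A compact way to write this bound, following the standard information-theoretic reading of Ashby’s law (an editorial illustration rather than an equation from this paper), measures variety as entropy: the variety of outcomes O that the regulator leaves uncorrected cannot fall below the variety of the disturbances D minus the variety of the regulator R.

```latex
% Information-theoretic form of the law of requisite variety (illustrative, not from the paper)
H(O) \;\geq\; H(D) - H(R)
```

Read this way, keeping outcomes stable in a richer technological environment (larger H(D)) requires a correspondingly richer repertoire of responses (larger H(R)), which is the sense in which rising complexity pushes us to keep augmenting brain function.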

The tendency to enrich our brain is an innate characteristic of humans. We have been trying to augment our mental abilities, either intentionally or unintentionally, for millennia through the use of botanicals and custom-made medicaments, herbs and remedies, and, more recently, synthetic nootropics and improved ways to assimilate information. Many of these methods are not only useful in healthy people but are invaluable in age-related neurodegenerative disorders such as dementia and Parkinson’s disease (Kumar and Khanum, 2012). Other neuroscience-based methods such as transcranial laser treatments and physical implants (such as neural dust nanoparticles) are useful in enhancing cognition and modulating other brain functions (Gonzalez-Lima and Barrett, 2014).

However, these approaches are limited to the biological human brain as a distinct agent. As shown by the increased research interest in brain to brain communication (Trimper et al., 2014), I argue that the issue of brain augmentation is now embracing a more global aspect. The reason is the continual development of technology, which is changing our society and culture (Long, 2010). Certain brain faculties that were originally evolved for solving practical physical problems have been co-opted and exapted for solving more abstract metaphors, making humans adopt a better position within a technological niche.

The line between human brain function and digital information technologies is progressively becoming indistinct and less well-defined. This blurring is possible through the development of new technologies which enable more efficient brain-computer interfaces (Pfurtscheller and Neuper, 2002), and recently, brain-to-brain interfaces (Grau et al., 2014).

We are now in a position to expand on this emergent worldview and examine what trends of systems neuroscience are likely in the near-term future. Technology has been the main driver that brought us to the position we are in today (Henry, 2014). This position is the merging of the physical human brain abilities with virtual domains and automated web services (Kurzweil, 2009). Modern humans cannot purely be defined by their biological brain function. Instead, we are now becoming an amalgam of biological and virtual/digital characteristics, a discrete unit, or autonomous agent, forming part of a wider and more global entity (Figure 1).


Figure 1. Computer-generated image of internet connections world-wide (Global Brain). The conceptual similarities with the human brain are remarkable. Both networks exhibit a scale-free, fractal distribution, with some weakly-connected units, and some strongly-connected ones which are arranged in hubs of increasing functional complexity. This helps protect the constituents of the network against stresses. Both networks are “small worlds” which means that information can reach any given unit within the network by passing through only a small number of other units. This assists in the global propagation of information within the network, and gives each and every unit the functional potential to be directly connected to all others. Source: The Opte Project/Barrett Lyon. Used under the Creative Commons Attribution-Non-Commercial 4.0 International License.
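
The two properties the caption attributes to both networks, a scale-free degree distribution with hubs and the “small world” property, can be reproduced in a few lines. The sketch below is my illustration rather than part of the paper; it uses the Barabási–Albert model from the networkx library as a generic stand-in for either network.

```python
# Generate a scale-free network and check the hub and small-world properties from the caption.
import networkx as nx

G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)  # 1000 units grown by preferential attachment

degrees = [d for _, d in G.degree()]
print("average degree:", sum(degrees) / len(degrees))       # most units are weakly connected...
print("largest hub degree:", max(degrees))                  # ...while a few hubs are very strongly connected
print("average shortest path length:",
      round(nx.average_shortest_path_length(G), 2))         # any unit reaches any other in a few hops
```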

Large Scale Networks and the Global Brain

The Global Brain (Heylighen, 2007; Iandoli et al., 2009; Bernstein et al., 2012) is a self-organizing system which encompasses all those humans who are connected with communication technologies, as well as the emergent properties of these connections. Its intelligence and information-processing characteristics are distributed, in contrast to that of individuals whose intelligence is localized. Its characteristics emerge from the dynamic networks and global interactions between its individual agents. These individual agents are not merely the biological humans but are something more complex. In order to describe this relationship further, I have introduced the notion of the noeme, an emergent agent, which helps formalize the relationships involved (Kyriazis, 2014a). The noeme is a combination of a distinct physical brain function and that of an “outsourced” virtual one. It is the intellectual “networked presence” of an individual within the GB, a meaningful synergy between each individual human, their social interactions and artificial agents, globally connected to other noemes through digital communications technology (and, perhaps soon, through direct brain to brain interfaces). A comparison can be made with neurons which, as individual discrete agents, form part of the human brain. In this comparison, the noemes act as the individual, information-sharing discrete agents which form the GB (Gershenson, 2011). The modeling of noemes helps us define ourselves in a way that strengthens our rational presence in the digital world. By trying to enhance our information-sharing capabilities we become better integrated within the GB and so become a valuable component of it, encouraging mechanisms active in all complex adaptive systems to operate in a way that prolongs our retention within this system (Gershenson and Fernández, 2012), i.e., prolongs our biological lifespan (Kyriazis, 2014b; Last, 2014b).

Discussion

This concept is a helpful way of interpreting the developing cognitive relationship between humans and artificial agents as we evolve and adapt to our changing technological environment. The concept of the noeme provides insights with regards to future problems and opportunities. For instance, the study of the function of the noeme may provide answers useful to biomedicine, by coopting laws applicable to any artificial intelligence medium and using these to enhance human health (Kyriazis, 2014a). Just as certain physical or pharmacological therapies for brain augmentation are useful in neurodegeneration in individuals, so global ways of brain enhancement are useful in a global sense, improving the function and adaptive capabilities of humanity as a whole. One way to augment global brain function is to increase the information content of our environment by constructing smart cities (Caragliu et al., 2009), expanding the notion of the Web of Things (Kamilaris et al., 2011), and by developing new concepts in educational domains (Veletsianos, 2010). This improves the information exchange between us and our surroundings and helps augment brain function, not just physically in individuals, but also virtually in society.

Practical ways for enhancing our noeme (i.e., our digital presence) include:

• Cultivate a robust social media base, in different forums.

• Aim for respect, esteem and value within your virtual environment.

• Increase the number of your connections both in virtual and in real terms.

• Stay consistently visible online.

• Share meaningful information that requires action.

• Avoid the use of meaningless, trivial or outdated platforms.

• Increase the unity of your connections by using only one (user) name for all online and physical platforms.

These methods can help increase information sharing and facilitate our integration within the GB (Kyriazis, 2014a). In practical terms, these actions are easy to perform and can encompass a wide section of modern communities. Although the benefits of these actions are not yet well studied, some initial findings appear promising (Griffiths, 2002; Granic et al., 2014).

Concluding Remarks

With regard to improving brain function, we are gradually moving away from the realm of science fiction and into the realm of reality (Kurzweil, 2005). It is now possible to suggest ways to enhance our brain function based on novel concepts that depend not only on neuroscience but also on digital and other technologies. The result of such augmentation benefits not only the individual brain but can also improve all of humanity in a more abstract sense. It improves human evolution and adaptation to new technological environments, and this, in turn, may have a positive impact on our health and thus on longevity (Solman, 2012; Kyriazis, 2014c).

In a more philosophical sense, our progressive and distributed amplification of brain function has begun to lead us toward attaining "god-like" characteristics (Heylighen, in press), particularly "omniscience" (through Google, Wikipedia, the semantic web, and Massive Open Online Courses (MOOCs), which dramatically enhance our knowledge base) and "omnipresence" (cloud and fog computing, Twitter, YouTube, the Internet of Things, the Internet of Everything). These are the result of outsourcing our brain capabilities to the cloud in a distributed and universal manner, which amounts to an ideal global neural augmentation. The first steps have already been taken through brain-to-brain communication research. The concept of systems neuroscience is thus expanded to encompass not only the human nervous network but also a global network with societal and cultural elements.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgment

I thank the reviewers for their help and input, particularly the first reviewer, who dedicated a great deal of time to improving the paper.

References

Asaro, P. (2008). “From mechanisms of adaptation to intelligence amplifiers: the philosophy of W. Ross Ashby,” in The Mechanical Mind in History, eds M. Wheeler, P. Husbands, and O. Holland (Cambridge, MA: MIT Press), 149–184.

Ashby, W. R. (1958). Requisite Variety and its implications for the control of complex systems. Cybernetica (Namur) 1, 2.

Bernstein, A., Klein, M., and Malone, T. W. (2012). Programming the Global Brain. Commun. ACM 55, 1. doi: 10.1145/2160718.2160731

Caragliu, A., Del Bo, C., and Nijkamp, P. (2009). Smart Cities in Europe. Serie Research Memoranda 0048, VU University Amsterdam, Faculty of Economics, Business Administration and Econometrics.

Deadwyler, S. A., Berger, T. W., Sweatt, A. J., Song, D., Chan, R. H., Opris, I., et al. (2013). Donor/recipient enhancement of memory in rat hippocampus. Front. Syst. Neurosci. 7:120. doi: 10.3389/fnsys.2013.00120

Engelbart, D. C. (1988). A Conceptual Framework for the Augmentation of Man’s Intellect. Computer-Supported Cooperative Work. San Francisco, CA: Morgan Kaufmann Publishers Inc. ISBN: 0-93461-57-5

Garcia, D., Mavrodiev, P., and Schweitzer, F. (2013). Social Resilience in Online Communities: The Autopsy of Friendster. Available online at: http://arxiv.org/abs/1302.6109 (Accessed October 8, 2014).

Garcia, D., and Tanase, D. (2013). Measuring Cultural Dynamics Through the Eurovision Song Contest. Available online at: http://arxiv.org/abs/1301.2995 (Accessed October 8, 2014).

Gershenson, C. (2011). The sigma profile: a formal tool to study organization and its evolution at multiple scales. Complexity 16, 37–44. doi: 10.1002/cplx.20350

Gershenson, C., and Fernández, N. (2012). Complexity and information: measuring emergence, self-organization, and homeostasis at multiple scales. Complexity 18, 29–44. doi: 10.1002/cplx.21424

Gonzalez-Lima, F., and Barrett, D. W. (2014). Augmentation of cognitive brain function with transcranial lasers. Front. Syst. Neurosc. 8:36. doi: 10.3389/fnsys.2014.00036

Granic, I., Lobel, A., and Engels, R. C. M. E. (2014). The Benefits of Playing Video Games. American Psychologist. Available online at: https://www.apa.org/pubs/journals/releases/amp-a0034857.pdf (Accessed October 5, 2014).

Grau, C., Ginhoux, R., Riera, A., Nguyen, T. L., Chauvat, H., Berg, M., et al. (2014). Conscious brain-to-brain communication in humans using non-invasive technologies. PLoS ONE 9:e105225. doi: 10.1371/journal.pone.0105225

Griffiths, M. (2002). The educational benefits of videogames. Educ. Health 20, 47–51.

Helbing, D. (2011). FuturICT-New Science and Technology to Manage Our Complex, Strongly Connected World. Available online at: http://arxiv.org/abs/1108.6131 (Accessed November 6, 2014).

Henry, C. (2014). IT and the Legacy of Our Cultural Heritage. EDUCAUSE Review, Vol. 49 (Louisville, CO: D. Teddy Diggs).

Heylighen, F., and Bollen, J. (1996). “The World-Wide Web as a Super-Brain: from metaphor to model,” in Cybernetics and Systems’ 96, ed R. Trappl (Vienna: Austrian Society For Cybernetics), 917–922.

Heylighen, F. (2007). The Global Superorganism: an evolutionary-cybernetic model of the emerging network society. Soc. Evol. Hist. 6, 58–119.

Heylighen, F., Busseniers, E., Veitas, V., Vidal, C., and Weinbaum, D. R. (2012). Foundations for a Mathematical Model of the Global Brain: architecture, components, and specifications (No. 2012-05). GBI Working Papers. Available online at: http://pespmc1.vub.ac.be/Papers/TowardsGB-model.pdf (Accessed November 6, 2014).

Heylighen, F. (in press). “Return to Eden? promises and perils on the road to a global superintelligence,” in The End of the Beginning: Life, Society and Economy on the Brink of the Singularity, eds B. Goertzel and T. Goertzel.

Johnson, N. L., Rasmussen, S., Joslyn, C., Rocha, L., Smith, S., and Kantor, M. (1998). “Symbiotic Intelligence: self-organizing knowledge on distributed networks driven by human interaction,” in Artificial Life VI, Proceedings of the Sixth International Conference on Artificial Life (Los Angeles, CA), 403–407.

Iandoli, L., Klein, M., and Zollo, G. (2009). Enabling on-line deliberation and collective decision-making through large-scale argumentation: a new approach to the design of an Internet-based mass collaboration platform. Int. J. Decis. Supp. Syst. Technol. 1, 69–92. doi: 10.4018/jdsst.2009010105

Kamilaris, A., Pitsillides, A., and Trifa, A. (2011). The Smart Home meets the Web of Things. Int. J. Ad Hoc Ubiquit. Comput. 7, 145–154. doi: 10.1504/IJAHUC.2011.040115

Kramer, A. D. I., Guillory, J. E., and Hancock, J. T. (2014). Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks. Available online at: http://www.pnas.org/content/111/24/8788.full (Accessed October 10, 2014).

Kumar, G. P., and Khanum, F. (2012). Neuroprotective potential of phytochemicals. Pharmacogn Rev. 6, 81–90. doi: 10.4103/0973-7847.99898

Kurzweil, R. (2005). The Singularity is Near: When Humans Transcend Biology. New York, NY: Viking/Penguin Books. ISBN: 978-0-670-03384-3.

Kurzweil, R. (2009). "The coming merging of mind and machine," in Scientific American. Available online at: http://www.scientificamerican.com/article/merging-of-mind-and-machine/ (Accessed November 5, 2014).

Kyriazis, M. (2014a). Technological integration and hyper-connectivity: tools for promoting extreme human lifespans. Complexity. doi: 10.1002/cplx.21626

Kyriazis, M. (2014b). Reversal of informational entropy and the acquisition of germ-like immortality by somatic cells. Curr. Aging Sci. 7, 9–16. doi: 10.2174/1874609807666140521101102

Kyriazis, M. (2014c). Information-Sharing, Adaptive Epigenetics and Human Longevity. Available online at: http://arxiv.org/abs/1407.6030 (Accessed October 8, 2014).

Last, C. (2014a). Global Brain and the future of human society. World Fut. Rev. 6, 143–150. doi: 10.1177/1946756714533207

Last, C. (2014b). Human evolution, life history theory and the end of biological reproduction. Curr. Aging Sci. 7, 17–24. doi: 10.2174/1874609807666140521101610

Long, S. M. (2010). Exploring Web 2.0: The Impact of Digital Communications Technologies on Youth Relationships and Sociability. Available online at: http://scholar.oxy.edu/cgi/viewcontent.cgi?article=1001&context=sociology_student (Accessed November 5, 2014).

Mayer-Kress, G., and Barczys, C. (1995). The global brain as an emergent structure from the Worldwide Computing Network, and its implications for modeling. Inform. Soc. 11, 1–27. doi: 10.1080/01972243.1995.9960177

Maynard Smith, J., and Szathmáry, E. (1997). The Major Transitions in Evolution. Oxford: Oxford University Press.

Nicolelis, M., and Laporta, A. (2011). Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives. New York, NY: Times Books/Henry Holt. ISBN: 0-8050-9052-5.

Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J., and Nicolelis, M. (2013). A brain-to-brain interface for real-time sharing of sensorimotor information. Sci. Rep. 3:1319. doi: 10.1038/srep01319

Pfurtscheller, G., and Neuper, C. (2002). Motor imagery and direct brain-computer communication. Proc. IEEE 89, 1123–1134. doi: 10.1109/5.939829

Rao, R. P. N., Stocco, A., Bryan, M., Sarma, D., and Youngquist, T. M. (2014). A direct brain-to-brain interface in humans. PLoS ONE 9:e111332. doi: 10.1371/journal.pone.0111332

Rodriguez, M. A., Steinbock, D. J., Watkins, J. H., Gershenson, C., Bollen, J., Grey, V., et al. (2007). Smartocracy: Social Networks for Collective Decision Making (p. 90b). Los Alamitos, CA: IEEE Computer Society.

Solman, P. (2012). As Humans and Computers Merge… Immortality? Interview with Ray Kurzweil. PBS, 2012-07-03. Available online at: http://www.pbs.org/newshour/bb/business-july-dec12-immortal_07-10/ (Retrieved November 5, 2014).

Stewart, J. E. (2014). The direction of evolution: the rise of cooperative organization. Biosystems 123, 27–36. doi: 10.1016/j.biosystems.2014.05.006

Trimper, J. B., Wolpe, P. R., and Rommelfanger, K. S. (2014). When “I” becomes “We”: ethical implications of emerging brain-to-brain interfacing technologies. Front. Neuroeng. 7:4. doi: 10.3389/fneng.2014.00004

Veletsianos, G. (Ed.). (2010). Emerging Technologies in Distance Education. Edmonton, AB: AU Press.

Vidal, C. (in press). “Distributing cognition: from local brains to the global brain,” in The End of the Beginning: Life, Society and Economy on the Brink of the Singularity, eds B. Goertzel and T. Goertzel.

Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., and Malone, T. W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science 330, 686–688. doi: 10.1126/science.1193147

Keywords: global brain, complex adaptive systems, human longevity, techno-cultural society, noeme, systems neuroscience

Citation: Kyriazis M (2015) Systems neuroscience in focus: from the human brain to the global brain? Front. Syst. Neurosci. 9:7. doi: 10.3389/fnsys.2015.00007

Received: 14 October 2014; Accepted: 14 January 2015;
Published online: 06 February 2015.

Edited by:

Manuel Fernando Casanova, University of Louisville, USA

Reviewed by:

Mikhail Lebedev, Duke University, USA
Andrea Stocco, University of Washington, USA

Copyright © 2015 Kyriazis. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: drmarios@live.it


 

This article can also be found at http://hplusmagazine.com/2015/02/10/human-brain-global-brain/

The coming transhuman era: Jason Sosa at TEDxGrandRapids [Transhumanism]

Dawn of Giants Favorite…

This video from TEDxGrandRapids is probably one of the best introductions to transhumanism. The video is called The coming transhuman era: Jason Sosa at TEDxGrandRapids. Jason Sosa is a tech entrepreneur, and I think it's pretty safe to say that we'll be hearing more about him in the near future. This one is an absolute must-see!


Runtime: 15:37

This video can also be found at https://www.youtube.com/watch?v=1Ugo2KEV2XQ


Video Info:

Published on Jun 24, 2014

Sosa is the founder and CEO of IMRSV, a computer vision and artificial intelligence company, and was named one of “10 Startups to Watch in NYC” by Time Inc. and one of “25 Hot and New Startups to Watch in NYC” by Business Insider. He has been featured by Forbes, CNN, the New York Times, Fast Company, Bloomberg and Business Insider, among others.

In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual TEDx events are self-organized.* (*Subject to certain rules and regulations)