Hugo de Garis – Singularity Skepticism (Produced by Adam Ford)

This is Hugo de Garis talking about why people tend to react to the idea of the technological singularity with a great deal of skepticism.  To address the skeptics, de Garis explains Moore’s Law and goes into its many implications.  Toward the end, de Garis suggests that people will begin to come around once they see their household electronics getting smarter and smarter.
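As a rough illustration of the kind of compounding de Garis leans on (the two-year doubling period and the starting transistor count below are illustrative assumptions, not figures from the video), here is a minimal sketch of Moore’s Law-style growth:

```python
# Minimal sketch of Moore's Law-style compounding.
# Assumptions (not from the video): a starting count of 1 billion transistors
# and a steady doubling every 2 years.

def transistor_count(years: float, start: float = 1e9, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years` of steady exponential doubling."""
    return start * 2 ** (years / doubling_period)

if __name__ == "__main__":
    for y in (2, 10, 20):
        print(f"after {y:2d} years: ~{transistor_count(y):.2e} transistors")
    # Over 20 years at a 2-year doubling period the count grows by 2**10 = 1024x:
    # growth that looks modest year to year compounds into orders of magnitude
    # within a few decades, which is the implication skeptics tend to underestimate.
```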


Runtime: 12:31


This video can also be found here and here.

Video Info:

Published on Jul 31, 2012

Hugo de Garis speaks about why people are skeptical about the possibility of machine intelligence, and also gives reasons for believing machine intelligence is possible and will quite probably be an issue we need to face in the coming decades.

“If the brain guys can copy how the brain functions closely enough… we will arrive at a machine based on neuroscience ideas, and that machine will be intelligent and conscious.”

 

 


Ben Goertzel – Beginnings [on Artificial Intelligence – Thanks to Adam A. Ford for this video.]

In this video, Ben Goertzel talks a little about how he got into AGI research and about the research itself.  I first heard of Ben Goertzel about four years ago, right when I was first studying computer science and considering a career in AI programming.  At the time, I was trying to imagine how you would build an emotionally intelligent machine.  I really enjoyed hearing some of his ideas then and still do.  I was also listening to a lot of Tony Robbins at the time, so as you can imagine, I came up with some pretty interesting theories on artificial intelligence and empathetic machines.  Maybe if I get enough requests I’ll write a special post on some of those ideas.  Just let me know if you’re interested.


Runtime: 10:33


This video can also be found here and here.

Video Info:

Published on Jul 27, 2012

Ben Goertzel talks about his early stages in thinking about AI, and two books: The Hidden Pattern and Building Better Minds.

The interview was done in Melbourne, Australia, while Ben was in town to speak at the Singularity Summit Australia 2011.

http://2011.singularitysummit.com.au

Interviewed, Filmed & Edited by Adam A. Ford
http://goertzel.org

 

Peter Voss Interview on Artificial General Intelligence

This is an interview with Peter Voss of Optimal talking about artificial general intelligence.  One of the things Voss addresses is the skepticism that is a common reaction when talking about creating strong AI, and why (as Tony Robbins always says) the past does not equal the future.  He also talks about why he thinks Ray Kurzweil’s prediction that AGI won’t be achieved for another 20 years is wrong (and I gotta say, he makes a good point).  If you are interested in artificial intelligence or ethics in technology, then you’ll want to watch this one…

And don’t worry, the line drawing effect at the beginning of the video only lasts a minute.


Runtime: 39:55


This video can also be found at https://www.youtube.com/watch?v=4W_vtlSjNk0

Video Info:

Published on Jan 8, 2013

Peter Voss is the founder and CEO of Adaptive A.I. Inc, an R&D company developing a high-level general intelligence (AGI) engine. He is also founder and CTO of Smart Action Company LLC, which builds and supplies AGI-based virtual contact-center agents — intelligent, automated phone operators.

Peter started his career as an entrepreneur, inventor, engineer and scientist at age 16. After several years of experience in electronics engineering, at age 25 he started a company to provide advanced custom hardware and software solutions. Seven years later the company employed several hundred people and was successfully listed on the Johannesburg Stock Exchange.

After selling his interest in the company in 1993, he worked on a broad range of disciplines — cognitive science, philosophy and theory of knowledge, psychology, intelligence and learning theory, and computer science — which served as the foundation for achieving new breakthroughs in artificial general intelligence. In 2001 he started Adaptive AI Inc., and last year founded Smart Action Company as its commercialization division.

Peter considers himself a free-minds-and-markets Extropian, and often writes and presents on philosophical topics including rational ethics, freewill and artificial minds. He is also deeply involved with futurism and life-extension.


http://www.optimal.org/peter/peter.htm

My main occupation is research in high-level, general (domain independent, autonomous) Artificial Intelligence — “Adaptive A.I. Inc.”

I believe that integrating insights from the following areas of cognitive science is crucial for rapid progress in this field:

Philosophy/epistemology – understanding the true nature of knowledge
Cognitive psychology (incl. developmental & psychometric) – for the analysis of cognition, and especially general conceptual intelligence
Computer science – self-modifying systems, combining new connectionist pattern manipulation techniques with ‘traditional’ AI engineering

Anyone who shares my passion – and/or concerns – for this field is welcome to contact me for brainstorming and possible collaboration.

My other big passion is exploring what I call Optimal Living: maximizing both the quantity & quality of life. I see personal responsibility and optimizing knowledge acquisition as key. Specific interests include:

Rationality, as a means for knowledge. I’m largely sympathetic to the philosophy of Objectivism, and have done quite a bit of work on developing a rational approach to (personal & social) ethics.
Health (quality): physical, financial, cognitive, and emotional (passions, meaningful relationships, appreciation of art, etc.). Psychology: IQ & EQ.
Longevity (quantity): general research, CRON (calorie restriction), cryonics
Environment: economic, social, political systems conducive to Optimal Living.

These interests logically lead to an interest in Futurism, in technology for improving life – overcoming limits to personal growth & improvement. The transhumanist philosophy of Extropianism best embodies this quest. Specific technologies that seem to hold the most promise include AI, nanotechnology, and the various health & longevity approaches mentioned above.

I always enjoy meeting new people to explore ideas, and to have my views critiqued. To this end I am involved in a number of discussion groups and salons (e.g. ‘Kifune’ futurist dinner/discussion group). Along the way I’m trying to develop and learn the complex art of constructive dialog.

Interview done at SENS party LA 20th Dec 2012.

 

 

Humanity+ and the Upcoming Battle between Good and Evil by Jeanne Dietsch

This article from the Humanity+ website (Humanity+ and the Upcoming Battle between Good and Evil) evaluates political stresses in light of transhumanism and the ever-nearing technological singularity.


 

Humanity+ and the Upcoming Battle between Good and Evil

[Image: Obama and Putin]

Many transhumanists seek a better world, made possible through massively improved intellectual capacity, aka Humanity+.

Yet, though we have more power to achieve Good, we have no better understanding of Good than philosophers of millennia ago. If groups continue to gain power exponentially yet disagree on goals, the result might not be tranquility. So far, our super powers have heightened the potential for global destruction. The means to avoid war lies not in increasing the intelligence of our weaponry, but in taming the emotional, political and economic systems that feed its use. Will H+ really alter such psychological and social networks?

Will we finally be able to unite and collaborate toward a consensus goal?

Increased speed and capacity have demonstrably improved our ability to predict outcomes. Solving Texas Hold ’em Poker is an impressive accomplishment. It suggests that once we decide on a goal, we will now be much more likely to discover the best way to achieve it, even if the path contains psychological bluffs and probability pitfalls.[i] With better speed, capacity and algorithms, our predictive and implementation powers grow.

Our goals, however, remain contentious. Each religious and philosophical in-group defines its own path to Good, Enlightenment or Heaven. To compress such variation into a single metric, some transhumanists propose sampling world populations or collecting a particularly enlightened group of religious and philanthropic leaders to create humanitarian norms that will be used to guide AGI behavior.

The latter was actually already accomplished on December 10, 1948, in response to the Second World War. The drafters included Dr. Charles Malik (Lebanon), Alexandre Bogomolov (USSR), Dr. Peng-chun Chang (Republic of China), René Cassin (France), Eleanor Roosevelt (US, Chair), Charles Dukes (United Kingdom), William Hodgson (Australia), Hernán Santa Cruz (Chile) and John P. Humphrey (Canada), with input from dozens of other representatives of nations as diverse as India and Iran.[ii]

The document is the United Nations Universal Declaration of Human Rights[iii]. Forty-eight nations with widely varying cultures signed this Declaration. However, even in the case of something so broadly accepted, even within the consensus-seeking environment following WWII, eight nations abstained from support: the Soviet Union and five affiliated nations, plus Saudi Arabia and apartheid South Africa. And, although the new People’s Republic of China joined the UN in 1971, it publicly and pointedly values economic progress over human rights, at least until it catches up to developed countries.[iv] Moreover, a number of its 1.3 billion citizens agree.

The point is that there is no coalescing consensus of what goals for humankind should be, even on something as basic as fundamental human rights. Conflict has been our past and will be our future. Some transhumanists talk about upcoming battles.

Hugo de Garis[v] expects conflict between “Terrans” who want to remain homo sapiens and “Cosmists” who expect AGI to replace humans, but how long will struggles last between those who welcome super powers and those who fight them? More likely, the long-term wars of the future will resemble those that ravage us now. Although many young educated adults believe their generation is more cosmopolitan, less nationalistic and more humanitarian, their counterparts are joining conservative, anti-immigration political movements, or even the murderous Islamic State! Do we really believe that only those with progressive Western values will control all of H+’s underlying drives? And, if not, are we not arming the enemy at the same time we arm ourselves with greater intelligence?

But fear of misuse is almost never a reason not to pursue knowledge. Perhaps H+, with superior intelligence, will be able to decode the patterns of the Universe and finally explain to us why we are here. Perhaps these super beings will finally reach consensus on our goals?

The aspiration for such a superhuman race is not a recent dream. In fact, over a century ago, Nietzsche wrote, in Also Sprach Zarathustra, that the ultimate purpose of humankind was to create a being transcending human abilities, an ubermensch. While ubermensch is often translated into English as “super man”, it is actually much closer to the concept of H+. The ubermensch was a person above all weaker beings, an empiricist who gained knowledge from his senses just as H+ will gain knowledge from trillions of sensors. The ubermensch would not be constrained by religious truisms but understand Nature directly.

However, ubermensch and H+ differ in at least two ways. First, Nietzsche’s character denigrated Platonic concepts and other abstractions because he considered them removed from experience, whereas we now view conceptual hierarchies as the brain’s means of finding patterns and thinking efficiently. We expect H+ to be able to abstract patterns in ways that will enable it to predict future developments far better than homo sapiens. Second, H+ differs from ubermensch in its attitude toward the body. Nietzsche saw the body as the essence of humankind. H+ hopes to escape it. In fact, the H+ holy grail of substrate-independent intelligence – uploading brains – very closely mirrors the Christian concept of a soul, the essence of a person that lives on after the body dies.

This other-worldly aspiration was anathema to Nietzsche at the time because it was not grounded in reality. Would he feel the same way today, when physics has transformed much of the invisible into the material? Perhaps not.

Regardless, is not the goal of transhumanists the creation of a new, ideal being that will understand its purpose better than we do? Are we not, in our struggle to bring meaning to our lives, setting the creation of H+ as a reason for humankind’s existence, for our own existence? In all honesty, are we really seeking something so different from what humans have sought for millennia: a reason, a cause, a goal for existence?

If so, we might also consider Nietzsche’s conclusion. Such goals are futile. Nietzsche viewed Darwinian evolution not as a march toward the ideal, but as a climb across ever-changing terrain. Nietzsche viewed creations as cyclic, or — as we might say today — fractal. From this perspective, creating an ubermensch will not lead to an idyllic existence; it will not stop our struggle; it will only transfer it to venues of a different scale: enormous gullies or minutest crevices. The only force that will stop us fighting among ourselves is a greater threat from beyond.

In fact, Nietzsche came to believe that it is the balancing of conflict with structure, chaos with art, and entropy with life that is each individual’s goal. When Maxwell’s demon opens the door and differences disappear into unchanging calmness, Life is over. Meanwhile, H+ will supersede homo sapiens, but only as one more level of being. We can evolve into ubermenschen, better suited than our hunter-gatherer-brained predecessors to live in today’s complexity, but H+ will not be perfect and will never be finished.

Our ultimate purpose will forever remain just out of sight, past the misty curve of hyperspace.


References

[i] Bowling, Michael; Burch, Neil; Johanson, Michael; Tammelin, Oskari. (2015) “Heads-up limit hold’em poker is solved.” Science 347(6218), 145-149.

[ii] The Drafters of the Universal Declaration of Human Rights. (2015) United Nations, New York, NY, US. http://www.un.org/en/documents/udhr/drafters.shtml

[iii] United Nations Universal Declaration of Human Rights (1948), United Nations, New York, NY, US. http://www.un.org/en/documents/udhr/index.shtml

[iv] Moore, Greg. (1999) China’s Cautious Participation in the UN Human Rights Regime, a review of China, the United Nations, and Human Rights: The Limits of Compliance, by Ann Kent. Philadelphia: University of Pennsylvania Press.

[v] De Garis, Hugo. (2013) “Will there be cyborgs?” In Between Ape and Artilect: Conversations with Pioneers of Artificial General Intelligence and Other Transformative Technologies, editor Ben Goertzel. Humanity+ Press, Los Angeles, CA.

About the author

Jeanne Dietsch is a serial tech entrepreneur, Harvard graduate in sci-tech policy, group-thinking facilitator and founder of Sapiens Plurum, an advocacy organization looking out for the interests of humankind.

Jeanne Dietsch
Sapiens Plurum “The Wisdom of Many”

Blog: Saving Humankind-ness

jdietsch@post.harvard.edu


This article can also be found here.