Concerning Superintelligence

These are my recommendations of key texts to read if you really want to become familiar with superintelligence.

SI-1. Good, I. J. (1966). Speculations concerning the first ultraintelligent machine. In Advances in computers (Vol. 6, pp. 31-88). Elsevier.

Irving John (Jack) Good was a mathematician who worked with Alan Turing and made significant contributions to breaking the Enigma codes; one could regard him as Turing’s statistician. Good later worked with British AI pioneer and computer designer Donald Michie, and he devoted much of his later life to research in Bayesian statistics. Good’s paper cited above was the first to clearly spell out the idea of ultraintelligent machines, and it can rightly be viewed as the basis of the superintelligence discipline today. The paper stated:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

This short paragraph not only presages the idea of superintelligent AI; it also laid the groundwork for subsequent Paperclip Apocalypse scenarios and the drive for AI safety considerations. Good was a particularly credible messenger, given his early, intimate technical familiarity and experience with highly complex and capable computers.

SI-2. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. New York: Oxford University Press.

Bostrom’s book was much awaited by the superintelligence (SI) community, and in some respects it provided academic sanction for treating runaway-AI potential for harm, and AI safety, as legitimate scholarly topics. In some ways the runaway-SI apocalypse scenarios counterbalance Ray Kurzweil’s visions of technological exponentiality and the Singularity.

SI-3. Drexler, K. E. (2019). Reframing Superintelligence: Comprehensive AI Services as General Intelligence. Technical Report #2019-1, Future of Humanity Institute, University of Oxford.

This report, by nanotechnology pioneer Eric Drexler, is a must-read. It projects a possible, if not likely, trajectory of AI development that envisions the emergence of asymptotically comprehensive, superintelligent-level AI services. Drexler has been prescient regarding both the importance and the trajectory of nanotechnology.

SI-4. Yampolskiy, R. V. (2015). Artificial Superintelligence: A Futuristic Approach. CRC Press.

While maintaining a focus on AI and superintelligence safety, Roman Yampolskiy brings additional dimensions to discussions of superintelligence. I am not quite sure why the term Artificial is needed in the title and the discussion: superintelligence is not now, and never will be, a normal or natural attribute, so adding artificial to superintelligence strikes me as redundant.

The book includes interesting and useful discussions of topics such as AI-Completeness and AI-Hardness, Mind Design, and associated taxonomies of real and speculative mind-design space. Most of the intensity and depth of discussion, though, is focused on the harm that SI can bring (and which, according to the author and many of the references cited, is very likely to occur). The detailed references provided are exceptional. Personally, I would prefer to see more discussion of the positive aspects of SI and of the hard problems it can, and should, solve first.

SI-5. Larrey, P. (2017). Would Super-Human Machine Intelligence Really Be Super-Human? In G. Dodig-Crnkovic and R. Giovagnoli (Eds.), Representation and Reality in Humans, Other Living Organisms and Intelligent Machines (Studies in Applied Philosophy, Epistemology and Rational Ethics 28). DOI 10.1007/978-3-319-43784-2_19.

Stanley & Lehman – Why Greatness Cannot Be Planned

Fascinating insights from computer science / artificial intelligence professors Kenneth Stanley and Joel Lehman …

https://amzn.to/2DlhLnX

Some have summarized their insights by writing: “only by doing activities that fulfill our curiosity without any pre-defined objectives, true creativity can be unleashed. They call this the ‘Myth of the Objective’: Objectives are well and good when they are sufficiently modest … In fact, objectives actually become obstacles towards more exciting achievements, like those involving discovery, creativity, invention, or innovation—or even achieving true happiness… the truest path to “blue sky” discovery or to fulfill boundless ambition, is to have no objective at all.”

Some of Stanley’s and Lehman’s insights:

  • “The flash of insight is seeing the bridge to the next stepping stone by building from the old ones.”

  • “[Picbreeder] is just one example of a fascinating class of phenomena that we might call non-objective search processes, or perhaps stepping stone collectors. The prolific creativity of these kinds of processes is difficult to overstate.”

  • “measuring success against the objective is likely to lead you on the wrong path in all sorts of situations”

  • “You can’t evolve intelligence in a Petri dish based on measuring intelligence. You can’t build a computer simply through determination and intellect—you need the stepping stones.”

  • “ambitious objectives are the interesting ones, and the idea that the best way to achieve them is by ignoring them flies in the face of common intuition and conventional wisdom. More deeply it suggests that something is wrong at the heart of search.”

I find their books inspiring and insightful, reframing questions and offering different lines of attack on AI and on search/optimization toward ambitious goals …
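
Stanley and Lehman’s non-objective search idea can be illustrated with a toy sketch (entirely hypothetical code on my part, not their Picbreeder system or any released implementation): candidates are kept purely for behavioral novelty relative to an archive of what has been seen, with no objective in sight.

```python
import random

def novelty(behavior, archive, k=3):
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(mutate, start, iters=200, seed=0):
    """Greedy search that keeps whichever mutation is most novel,
    ignoring any externally defined objective."""
    rng = random.Random(seed)
    archive, current = [], start
    for _ in range(iters):
        candidates = [mutate(current, rng) for _ in range(5)]
        current = max(candidates, key=lambda c: novelty(c, archive))
        archive.append(current)
    return archive

# Toy 1-D "behavior" space: mutate by a small random step.
walk = novelty_search(lambda x, rng: x + rng.uniform(-1, 1), start=0.0)
print(len(walk), round(min(walk), 2), round(max(walk), 2))
```

In this toy run the archive tends to spread outward from the start point: rewarding novelty alone produces exploration, which is roughly the authors’ point about stepping stones.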


Demis Hassabis talk on General Artificial Intelligence

Demis Hassabis: Towards General Artificial Intelligence – talk at the Center for Brains, Minds and Machines (CBMM). [Background: Dr. Demis Hassabis is the co-founder and CEO of DeepMind, the world’s leading general artificial intelligence (AI) company, which was acquired by Google in 2014 in its largest-ever European acquisition.]

The talk draws on Demis’ eclectic experiences as an AI researcher, neuroscientist and video games designer.

See also:

Deep Learning Drizzle

u/kmario23 over at Reddit points to a wonderful new resource, Deep Learning Drizzle [on GitHub].

In kmario23’s words: “I have collected a list of freely available courses on Machine Learning, Deep Learning, Reinforcement Learning, Natural Language Processing, Computer Vision, Probabilistic Graphical Models, Machine Learning Fundamentals, and Deep Learning boot camps or summer schools.”

So I checked it out and immediately got absorbed watching Ian Goodfellow …

Ian and his PhD advisor Yoshua Bengio (with Aaron Courville) wrote the Deep Learning textbook; take a look at it.

Goodfellow posted PDFs of his talks here.

https://www.reddit.com/r/MachineLearning/ is worth following.

Artificial Intelligence, Natural Stupidity

According to popular legend and urban myth, Amos Tversky is said to have said:

“My colleagues, they study artificial intelligence; me, I study natural stupidity.”

This comes from CoEvolving Innovations, which seems like a fascinating resource.

The blog entry there talks about Daniel Kahneman and Amos Tversky.

The question of how intelligence and stupidity are related is fascinating.

There’s also a reference to the Daniel Kahneman, Paul Slovic, and Amos Tversky book Judgment Under Uncertainty: Heuristics and Biases, which I now feel compelled to investigate.

Interesting factoid: Kahneman was awarded the 2002 Nobel Prize in economic sciences despite being a psychologist, not an economist. Which goes to show that Forrest Gump’s mom was right: “Life is like a box of chocolates. You never know what you’re gonna get.”

Deep Learning on My Mind

OK, so I started perusing Terry Sejnowski’s recent book, The Deep Learning Revolution. It’s dedicated to Bo and Sol, Theresa, and Joseph, and is in memory of Solomon Golomb. Nice!

It’s a great book. In the short time I spent with it, I learned quite a lot. I decided to see what matters most to Terry by looking at the topics he spends the most time on. What pops out first is neural networks and deep learning [to be expected]; after that, the items getting the most discussion are:
  • the brain
  • machine learning
  • learning algorithm
  • artificial intelligence
  • the world
  • visual cortex
  • the network
  • Boltzmann machine
  • the cortex
  • Geoffrey Hinton [looks like Geoff is really getting attention and kudos from everyone!!]
  • network models
  • the future
  • learning
  • self-driving car
  • learning networks
  • cost function
  • deep learning networks
  • Hopfield net
  • primary visual cortex
  • the visual cortex
  • independent component analysis
  • real world
  • brains
  • the internet
  • the perceptron
  • facial expressions
  • reinforcement learning
  • Francis Crick
  • hidden units
  • the retina
  • information processing systems
  • neural information processing
  • neural information processing systems
  • TD-Gammon
  • the Boltzmann machine
  • computer vision
  • driving cars
  • simple cells
  • the Hopfield net
  • cerebral cortex
  • David Hubel
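
Out of curiosity about how such a ranking might be produced, here is a crude sketch of phrase-frequency counting (hypothetical code; I don’t know what tool actually generated the list above):

```python
from collections import Counter
import re

def top_phrases(text, n=2, k=10):
    """Count the k most common n-word phrases as a crude topic signal."""
    words = re.findall(r"[a-z]+", text.lower())
    ngrams = zip(*(words[i:] for i in range(n)))
    return Counter(" ".join(g) for g in ngrams).most_common(k)

sample = ("deep learning networks learn representations; "
          "deep learning networks need data; the brain inspires deep learning")
print(top_phrases(sample, n=2, k=3))
```

On a full book text, the top bigrams and trigrams yield roughly this kind of topic list, though a real tool would also strip stopwords.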

Somewhere further down the list I came across Soumith Chintala, over at Facebook AI / Courant Institute. His was a new name for me. Looks like he’s a PyTorch maven and super-coder. Nice! His Wasserstein Generative Adversarial Network (GAN) paper seems pretty nice. Apparently FAIR has advanced the ball a lot with generative adversarial networks; I need to be paying much more attention. Also noted a new name to follow: Cade Metz, who writes about technology for The New York Times.

All this from my first glance at The Deep Learning Revolution.  

Read it … I will get deeper into deep learning as well.

Happy Holidays …


ExaIntelligence – Coming Up Soon.

At a recent Advanced Scientific Computing Advisory Committee (ASCAC) meeting in Arlington, Va., DOE announced that the “Aurora” supercomputer is on track to be the United States’ first exascale system. It will be built by Intel and Cray for Argonne National Laboratory; the delivery date has shifted from 2018 to 2021, and the target capability has been expanded from 180 petaflops to 1,000 petaflops (1 exaflop).
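
As a quick back-of-the-envelope check on those numbers (simple unit arithmetic, nothing more):

```python
# Unit arithmetic for the Aurora capability targets quoted above.
PETA, EXA = 10**15, 10**18

old_target = 180 * PETA   # original target: 180 petaflop/s
new_target = 1 * EXA      # revised target: 1 exaflop/s

print(new_target / PETA)                  # 1000.0 -> 1 exaflop is 1,000 petaflops
print(round(new_target / old_target, 1))  # 5.6 -> roughly a 5.6x increase
```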

Wow! One can only speculate about what this means for Artificial and Advanced Intelligence (AI/AI) and the progression to the Singularity.  ExaIntelligence Arriving.