These are my recommendations of key texts to read if you really want to become familiar with superintelligence.
SI-1. Good, I. J. (1966). Speculations concerning the first ultraintelligent machine. In Advances in computers (Vol. 6, pp. 31-88). Elsevier.
Irving John (Jack) Good was a mathematician who worked with Alan Turing and made significant contributions to breaking the Enigma codes. One could regard him as Turing’s statistician. Good later worked with British AI pioneer and computer designer Donald Michie, and devoted much of his later life to research in Bayesian statistics. Good’s paper cited above was the first to clearly spell out the idea of ultraintelligent machines, and it can rightly be viewed as the basis of the superintelligence discipline today. The paper stated:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
This short paragraph not only presages the idea of superintelligent AI; it also laid the groundwork for subsequent Paperclip Apocalypse scenarios and the drive for AI safety considerations. Good was a particularly credible messenger, given his early, intimate, and highly knowledgeable technical familiarity with highly complex and capable computers.
SI-2. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. New York: Oxford University Press.
Bostrom’s book was much awaited by the superintelligence (SI) community, and in some respects it provided the academic sanctioning of runaway-AI potential for harm, and of AI safety, as legitimate scholarly topics for discussion. In some ways, the runaway-SI apocalypse scenarios act as a counterbalance to Ray Kurzweil’s visions of technological exponentiality and the Singularity.
SI-3. Drexler, K. E. (2019). Reframing Superintelligence: Comprehensive AI Services as General Intelligence. Technical Report #2019-1, Future of Humanity Institute, University of Oxford.
This is a must-read by Eric Drexler, a pioneer of nanotechnology. The report projects a possible, if not likely, trajectory of AI development that envisions the emergence of asymptotically comprehensive, superintelligent-level AI services. Drexler has been prescient regarding the importance and trajectory of nanotechnology.
SI-4. Yampolskiy, R. V. (2015). Artificial Superintelligence: A Futuristic Approach. CRC Press.
While maintaining a focus on AI and superintelligence safety, Roman Yampolskiy brings additional dimensions to discussions of superintelligence. I am not quite sure why the term Artificial is needed in the title and the discussion. Superintelligence is not now, and will never be, a normal or natural attribute; I view adding “artificial” to “superintelligence” as redundant.
The book includes interesting and useful discussions on topics such as AI-Completeness and AI-Hardness, Mind Design, and associated taxonomies of real and speculative mind-design space. Most of the intensity and depth of discussion, though, is focused on the harm that SI can bring (which the author, and many of the references cited, view as very likely to occur). The detailed references provided are exceptional. Personally, I would prefer to see more discussion of the positive aspects of SI and the hard problems it can and should solve first.
SI-5. Larrey, P. (2017). Would Super-Human Machine Intelligence Really Be Super-Human? In G. Dodig-Crnkovic & R. Giovagnoli (Eds.), Representation and Reality in Humans, Other Living Organisms and Intelligent Machines (Studies in Applied Philosophy, Epistemology and Rational Ethics 28). DOI: 10.1007/978-3-319-43784-2_19.