“The occupational activities of children are learning, thinking, playing and the like. Yet we tell them nothing about those things.” So said AI pioneer Seymour Papert, quoted in Pamela McCorduck’s Machines Who Think (an outstanding book; Pam is a great author, and it turns out she’s the wife of Joseph Traub, who chaired the Computer Science departments at Carnegie Mellon University and Columbia University). She had amazing insight into the real story 🙂, insight not found elsewhere. https://amzn.to/2FwGmIu
EXCELLENT, EXCELLENT BOOK … it’s really packed with amazing insights and details hidden from public view …
I didn’t realize Papert’s connection with Piaget, or his deep understanding of and interest in how children learn. Of course, Papert and Minsky’s Perceptrons was widely known, and it recently got a refresh boost: the perceptron ideas that, in prehistoric times, with Marvin Minsky, helped pave the way to the AI we know today. That’s where the real action was, and maybe still is. Check out the reboot over at https://amzn.to/2TNjok7
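For readers who haven’t met the perceptron itself, here is a minimal sketch of the classic Rosenblatt learning rule that Minsky and Papert analyzed. The function name and the AND-gate example are my own illustrative choices, not anything from the book:

```python
# Minimal sketch of the Rosenblatt perceptron learning rule.
# Labels are +1/-1; on a misclassified point, nudge the weights toward it.

def train_perceptron(samples, labels, epochs=20, lr=1.0):
    """Learn weights w and bias b so that sign(w.x + b) matches each label."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified (or on the boundary)
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Logical AND is linearly separable, so the perceptron converges on it.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in X]
print(preds)  # → [-1, -1, -1, 1]
```

XOR, famously, is not linearly separable, which is exactly the limitation Perceptrons made precise.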
One of the main categories of discussion in this book is that of worthwhile tasks for AI. I will devote some time to stating some of the recognized questions, problems, and tasks. I will also mention some notable AI accomplishments and highlight a few of the recognized scholarly achievements. Another topic for discussion is the classification of intelligences. What is Natural Intelligence? What is Artificial General Intelligence? What is Superintelligence? What about human measures such as IQ, or the g factor? What does the AlphaZero algorithm beating the best human players at Chess, Go, and Shogi mean? Can the Paperclip Apocalypse really happen?
All these and more … coming soon …
OK, so I started perusing Terry Sejnowski’s recent book, The Deep Learning Revolution. It’s dedicated to Bo and Sol, Theresa, and Joseph, and is in memory of Solomon Golomb. Nice!
- It’s a great book. In the short time I spent with it, I learned quite a lot. I decided to see what matters most to Terry by looking at the topics he spends the most time on. What pops out first: neural networks and deep learning [to be expected]. After that, the items getting the most discussion are:
- the brain
- machine learning
- learning algorithm
- artificial intelligence
- the world
- visual cortex
- the network
- Boltzmann machine
- the cortex
- Geoffrey Hinton [looks like Geoff is really getting attention and kudos from everyone!!]
- network models
- the future
- self-driving car
- learning networks
- cost function
- deep learning networks
- Hopfield net
- primary visual cortex
- the visual cortex
- independent component analysis
- real world
- the internet
- the perceptron
- facial expressions
- reinforcement learning
- Francis Crick
- hidden units
- the retina
- information processing systems
- neural information processing
- neural information processing systems
- TD-Gammon
- the Boltzmann machine
- computer vision
- driving cars
- simple cells
- the Hopfield net
- cerebral cortex
- David Hubel
Somewhere further down the list I came across Soumith Chintala, over at Facebook AI / Courant Institute. His was a new name for me. Looks like he’s a PyTorch maven and a super-coder. Nice! His Wasserstein Generative Adversarial Network (GAN) paper seems pretty nice. Apparently FAIR has advanced the ball a lot with Generative Adversarial Networks. I need to be paying much more attention. Also noted a new name to follow: Cade Metz, who writes about technology for The New York Times.
All this from my first glance at The Deep Learning Revolution.
Read it … I will get deeper into deep learning as well.
Happy Holidays …
So much has happened in the past year. Somehow I lost track of some of the time. Meanwhile, cosmology is flourishing with speculations of many, if not infinitely many, possible parallel universes. Maybe they’re in our collective imagination. Maybe they are an artifact of the immaturity of our cognitive abilities. While we’ve done much, and progressed to amazing heights, it would appear that we’re still far away from a comprehensive perspective on the universe.
I’ve been researching Babbage recently, and came across his Ninth Bridgewater Treatise. There are some profound insights and speculations there regarding the nature of our understanding of complex and rare events. Given Charlie’s contribution to establishing the feasibility of ‘extreme computing’, one must pay attention to his other works. The NBT can shed some light on the multiple-universe puzzle.
Anything To Triples (any23) is a library, a web service and a command line tool that extracts structured data in RDF format from a variety of Web documents.
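To make concrete what “extracting structured data as triples” means, here is a toy sketch in the spirit of any23: pulling subject–predicate–object triples out of HTML with RDFa-style `property` attributes and printing them as N-Triples. This is not any23’s actual API (any23 is a Java library); the class, subject URI, and snippet below are all illustrative:

```python
# Toy "anything to triples" sketch: extract (subject, predicate, object)
# triples from RDFa-style HTML attributes. Not the any23 API, just the idea.
from html.parser import HTMLParser

class TripleExtractor(HTMLParser):
    def __init__(self, subject):
        super().__init__()
        self.subject = subject  # URI the extracted triples describe
        self.prop = None        # predicate seen on the most recent open tag
        self.triples = []

    def handle_starttag(self, tag, attrs):
        self.prop = dict(attrs).get("property")

    def handle_data(self, data):
        if self.prop and data.strip():
            self.triples.append((self.subject, self.prop, data.strip()))
            self.prop = None

html = ('<div><span property="dc:title">Machines Who Think</span>'
        '<span property="dc:creator">Pamela McCorduck</span></div>')
ex = TripleExtractor("http://example.org/book")
ex.feed(html)
for s, p, o in ex.triples:
    print(f'<{s}> <{p}> "{o}" .')
```

The real any23 handles many formats (RDFa, Microdata, microformats, and more) and emits proper RDF; this sketch only shows the shape of the output.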
At the Advanced Scientific Computing Advisory Committee (ASCAC) meeting in Arlington, Va., not too long ago, DOE announced that the “Aurora” supercomputer is on track to be the United States’ first exascale system. It will be built by Intel and Cray for Argonne National Laboratory; the delivery date has shifted from 2018 to 2021, and the target capability has been expanded from 180 petaflops to 1,000 petaflops (1 exaflop).
Wow! One can only speculate about what this means for Artificial and Advanced Intelligence (AI/AI) and the progression to the Singularity. ExaIntelligence Arriving.
To get Artificial and Advanced Intelligences (AI/AI) into more serious realms, one needs to consider foundational Reality as composed of Triple Realities: Objective, Subjective, and Intersubjective.
For the time being … a good place to start is here:
Vladimir Alexandrovich Voevodsky, winner of the 2002 Fields Medal, passed away earlier this year. Way too early. Explore his UNIVALENT FOUNDATIONS (2014), where he stated:
I think it was at this moment that I largely stopped doing what is called “curiosity-driven research” and started to think seriously about the future. … It soon became clear that the only real long-term solution to the problems that I encountered is to start using computers in the verification of mathematical reasoning.
Some of the folks who mattered … the Brightest Candles of all
We’re on the path to some seriously extreme computing. One hopes that this will be put to good, strike that, great use.