OK, so I started perusing Terry Sejnowski’s recent book, The Deep Learning Revolution. It’s dedicated to Bo and Sol, Theresa, and Joseph, and is “In memory of Solomon Golomb.” Nice!
It’s a great book. In the short time I’ve spent with it, I learned quite a lot. I decided to see what matters most to Terry by looking at the topics he spends the most time on. Here’s what pops out first: neural networks and deep learning [to be expected]. After that, the items getting the most discussion are (a rough sketch of the phrase count itself follows the list):
- the brain
- machine learning
- learning algorithm
- artificial intelligence
- the world
- visual cortex
- the network
- Boltzmann machine
- the cortex
- Geoffrey Hinton [looks like Geoff is really getting attention and kudos from everyone!!]
- network models
- the future
- learning
- self-driving car
- learning networks
- cost function
- deep learning networks
- Hopfield net
- primary visual cortex
- the visual cortex
- independent component analysis
- real world
- brains
- the internet
- the perceptron
- facial expressions
- reinforcement learning
- Francis Crick
- hidden units
- the retina
- information processing systems
- neural information processing
- neural information processing systems
- TD-Gammon
- the Boltzmann machine
- computer vision
- driving cars
- simple cells
- the Hopfield net
- cerebral cortex
- David Hubel
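For the curious, here’s a minimal sketch of how a phrase count like the one above could be produced. Everything in it is my own assumption, not anyone’s actual method: the filename deep_learning_revolution.txt is a stand-in, and a real pass would filter stopwords and merge variants like “the brain” and “brains.”

```python
# Minimal sketch: count the most frequent two- and three-word phrases in a text.
# Assumptions: the book's text is available locally as a plain-text file
# (the filename below is hypothetical).
import re
from collections import Counter

def top_phrases(path, n_values=(2, 3), top_k=40):
    """Return the top_k most common n-word phrases in a plain-text file."""
    with open(path, encoding="utf-8") as f:
        # Lowercase and keep word characters so "Boltzmann," matches "boltzmann"
        words = re.findall(r"[a-z']+", f.read().lower())
    counts = Counter()
    for n in n_values:
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    return counts.most_common(top_k)

# Hypothetical usage -- 'deep_learning_revolution.txt' is a stand-in filename:
for phrase, count in top_phrases("deep_learning_revolution.txt"):
    print(f"{count:5d}  {phrase}")
```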
Somewhere further down the list I came across Soumith Chintala over at Facebook AI Research / the Courant Institute. His was a new name for me. Looks like he’s a PyTorch maven and a super-coder. Nice! His Wasserstein Generative Adversarial Network (GAN) paper seems quite strong. Apparently FAIR has advanced the ball a lot with Generative Adversarial Networks; I need to be paying much more attention. Also noted a new name to follow: Cade Metz, who writes about technology for The New York Times.
All this from my first glance at The Deep Learning Revolution.
Read it … I will be getting deeper into deep learning as well.
Happy Holidays …