
Slides Christian Gamrat, ESSDERC 2012
Much AI/ML progress in the last decade can be attributed to better hardware and more data
Good old online backpropagation for plain multilayer perceptrons yields a very low 0.35% error rate on the MNIST handwritten digits benchmark. All we need to achieve this best result so far are many hidden layers, many neurons per layer, numerous deformed training images to avoid overfitting, and graphics cards to greatly speed up learning.
Ciresan et al. 2010
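To make the quoted recipe concrete, the sketch below shows what "online backpropagation for a plain multilayer perceptron" means in code: one example at a time, forward pass, backward pass, immediate weight update. This is a minimal illustration only; the network size, learning rate, and random stand-in data are assumptions, and the deep architectures, image deformations, and GPU kernels actually used by Ciresan et al. (2010) are omitted.

```python
# Minimal sketch of online backpropagation for a plain MLP (illustrative only).
# Random data stands in for MNIST; hyperparameters are not those of Ciresan et al.
import numpy as np

rng = np.random.default_rng(0)

# MNIST-shaped toy data: 28x28 grayscale images flattened to 784, 10 classes.
X = rng.random((1000, 784))
y = rng.integers(0, 10, size=1000)

# One hidden layer (the paper uses several, much wider ones).
W1 = rng.normal(0, 0.01, (784, 256)); b1 = np.zeros(256)
W2 = rng.normal(0, 0.01, (256, 10));  b2 = np.zeros(10)
lr = 0.01

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for epoch in range(5):
    for i in rng.permutation(len(X)):        # "online": one example at a time
        x, t = X[i], y[i]
        # Forward pass
        h = np.tanh(x @ W1 + b1)
        p = softmax(h @ W2 + b2)
        # Backward pass (cross-entropy loss, integer target t as one-hot)
        dp = p.copy(); dp[t] -= 1.0          # gradient w.r.t. output logits
        dW2 = np.outer(h, dp); db2 = dp
        dh = (W2 @ dp) * (1.0 - h ** 2)      # backprop through tanh
        dW1 = np.outer(x, dh); db1 = dh
        # Immediate (per-sample) gradient step
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
```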
See INM-6 Tutorial
Distributed Organization of the Primate Visual Cortex
Felleman and Van Essen, Cerebral Cortex, 1991
Pyramidal cells (Neurons) in the Cortex
Ramon y Cajal, 1911
See INM-6 Tutorial
Slides Christian Gamrat, ESSDERC 2012
Slide by Tobi Delbruck, 2007

The constraints on our analog silicon systems are similar to those on neural systems: wire is limited, power is precious, robustness and reliability are essential.
Mead, 1989
Chicca, Stefanini, and Indiveri, 2013
Deiss et al. 1993