CIFAR fellows, advisor sum up the state of deep learning in Nature
Deep learning has vastly improved computer recognition of speech, images and objects, and has become an important tool in genomics research, CIFAR researchers wrote in Nature.
CIFAR Senior Fellows Yann LeCun (Facebook) and Yoshua Bengio (Université de Montréal), and Distinguished Fellow and Advisor Geoffrey Hinton (Google, University of Toronto), published a review of the state of deep learning research in the journal on May 28. They described the evolution of supervised learning, which trains computers to recognize labelled inputs, and the role of unsupervised learning in advancing deep feedforward networks, the earliest type of artificial neural network.
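To make the idea of supervised learning concrete: a model is shown inputs paired with labels and adjusts its parameters to reduce its prediction error. The sketch below is purely illustrative (not code from the review); it trains a single logistic neuron, the basic building block of the feedforward networks the authors discuss, to learn the AND function from four labelled examples by gradient descent.

```python
import math
import random

# Toy labelled dataset for supervised learning: (input, label) pairs
# encoding the logical AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # weights
b = 0.0                                            # bias

def predict(x):
    """Single logistic neuron: weighted sum passed through a sigmoid."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

lr = 0.5  # learning rate
for _ in range(2000):          # repeated passes over the labelled data
    for x, y in data:
        p = predict(x)
        err = p - y            # gradient of cross-entropy loss w.r.t. z
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print([round(predict(x)) for x, _ in data])  # → [0, 0, 0, 1]
```

Stacking many such neurons in layers, and training all the weights at once with backpropagation, yields the deep feedforward networks the review describes.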
The authors credit CIFAR with supporting this line of research at a time when many were skeptical of its promise.
“Interest in deep feedforward networks was revived around 2006 by a group of researchers brought together by the Canadian Institute for Advanced Research (CIFAR),” they write.
Wired magazine interviewed LeCun about progress in the field, focusing on the movement toward applying deep learning to robots and self-driving cars. Bloomberg News also wrote about efforts central to the CIFAR program in Learning in Machines & Brains (formerly known as Neural Computation & Adaptive Perception) to build artificial intelligence inspired by the human brain.
Image above: Yann LeCun, an NYU artificial intelligence researcher who now works for Facebook. Photo: Josh Valcarcel/WIRED