David J. Fleet
Computer Scientist
David J. Fleet’s research interests include computer vision, machine learning, image processing, visual perception, and visual neuroscience. He is interested in how animals see and how we can develop machines with similar or better visual capabilities. His research has focused on mathematical foundations and algorithms for visual motion analysis, tracking, human pose and motion estimation, latent variable models, physics-based models of human motion and scene interactions, data structures for indexing and search over massive image corpora, and models of biological motion perception and stereopsis.
Koenderink Prize, 2010.
Best Paper Award, British Machine Vision Conference (BMVC), 2009.
Best Paper Award, ACM Symposium on User Interface Software and Technology (UIST), 2003.
Marr Prize Honorary Mention, 1999.
Alfred P. Sloan Research Fellowship, 1996.
M.A. Brubaker et al., "Building proteins in a day: Efficient 3D molecular reconstruction," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, 2015.
M. Norouzi et al., "Fast exact search in Hamming space with multi-index hashing," IEEE Trans. Pattern Anal. Mach. Intell., vol. 36, no. 6, pp. 1107-1119, 2014.
M. Norouzi et al., "Hamming distance metric learning," Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, 2012.
M. de La Gorce et al., "Model-based 3D hand pose estimation from monocular video," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 9, pp. 1793-1805, 2011.
G.W. Taylor et al., "Dynamical binary latent variable models for 3D human pose tracking," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, 2010.
Senior Fellow, Learning in Machines & Brains
University of Toronto, Computer Science Department
PhD (Computer Science) University of Toronto
MS (Computer Science) University of Toronto
BSc (Honours Computer Science and Mathematics) Queen's University