Kyunghyun Cho


Appointment

  • CIFAR Azrieli Global Scholars
  • Learning in Machines & Brains

Institution

  • New York University
    Courant Institute, Computer Science Department and Center for Data Science

Country

  • United States

Education

  • DSc (Information and Computer Science), Aalto University
  • MSc (Information and Computer Science), Aalto University
  • Postdoctoral Fellow, University of Montreal

About

Kyunghyun Cho aims to build an intelligent machine that communicates with humans, actively seeks knowledge and creates new knowledge.

Cho and his collaborators have extensively investigated natural language processing and machine translation. This has resulted in an attention mechanism for artificial neural networks and a new paradigm in machine translation, called ‘neural machine translation.’ It has not only advanced research, but has also been adopted by industry to produce better machine translation systems.
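The attention mechanism mentioned above lets a decoder compute a weighted average of encoder hidden states at each step, with weights derived from learned alignment scores. The following is a minimal NumPy sketch of additive (Bahdanau-style) attention; all names, shapes, and the random toy inputs are illustrative assumptions, not taken from any particular implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy dimensions and inputs (assumptions for illustration only).
d = 4                                # hidden size
rng = np.random.default_rng(0)
H = rng.normal(size=(5, d))          # 5 encoder hidden states h_i
s = rng.normal(size=(d,))            # current decoder state

# Learned parameters of the alignment model (randomly initialized here).
W_h = rng.normal(size=(d, d))
W_s = rng.normal(size=(d, d))
v = rng.normal(size=(d,))

# Alignment scores: e_i = v^T tanh(W_h h_i + W_s s)
scores = np.tanh(H @ W_h + s @ W_s) @ v

# Attention weights over source positions, and the resulting context vector.
alpha = softmax(scores)              # sums to 1 over the 5 source positions
context = alpha @ H                  # weighted sum of encoder states, shape (d,)
```

The context vector is then combined with the decoder state to predict the next target word, letting the model "attend" to different source positions at each step rather than compressing the whole source sentence into a single fixed vector.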

Recently, Cho has moved beyond human languages to study emergent communication among machines, equipping them to exchange information as efficiently and effectively as possible when solving challenging problems that require coordination among multiple intelligent agents. He believes this communication will not only help machines solve challenging problems themselves, but will also allow them to help humans expand the horizons of knowledge.

Awards

Google Research Award, 2016 and 2017

Relevant Publications

Cho, K. et al. "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation." EMNLP, 2014.

Bahdanau, D., K. Cho, and Y. Bengio. "Neural machine translation by jointly learning to align and translate." ICLR, 2015.

Xu, K. et al. "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention." ICML, 2015.

Firat, O., K. Cho, and Y. Bengio. "Multi-Way, Multilingual Neural Machine Translation with a Shared Attention Mechanism." NAACL, 2016.

Lee, J., K. Cho, and T. Hofmann. "Fully Character-Level Neural Machine Translation without Explicit Segmentation." TACL, 2017.
