
Integrating Semantics into Neural Machine Translation

By Leshem Choshen
Location: Bloomfield 152, Faculty of Industrial Engineering and Management
Wednesday 01 January 2020, 10:30 - 11:30
The talk will present several works that aim to make better use of linguistic knowledge in text generation tasks. The first part focuses on the biases that arise from the use of references in tasks with multiple references, and on one approach to circumventing them using semantic evaluation.
The second part turns to machine translation. The focus here is the automatic creation of challenge sets (unit tests for neural networks), which we use to test the translation of long-distance dependencies. Time permitting, we will also discuss our results on using reinforcement learning for machine translation, showing that MRT lacks REINFORCE's guarantees, and that much of the improvement seen does not come from the reasons we expect it to (e.g., optimizing a constant reward of 1 is no worse than optimizing an informative reward function).
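A toy sketch (an illustration, not the experiments from the paper) of why a constant reward of 1 can behave like an informative one: with a constant reward, the expected REINFORCE gradient is exactly zero, yet individual stochastic updates still reshape the output distribution, making it peakier over time. The vocabulary size, learning rate, and step count below are arbitrary choices for the demonstration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

VOCAB = 3   # toy vocabulary size (arbitrary)
r = 1.0     # constant reward for every sampled output

# Analytic check: with a constant reward, the expected REINFORCE
# gradient  E_y[ r * (onehot(y) - p) ] = r * (p - p) = 0,
# so constant-reward training has no systematic direction.
p = softmax(np.zeros(VOCAB))
expected_grad = sum(p[y] * r * (np.eye(VOCAB)[y] - p) for y in range(VOCAB))
print("expected gradient:", expected_grad)  # ~ [0. 0. 0.]

# Stochastic simulation: individual REINFORCE steps still move the
# distribution, and over many steps it concentrates on one token
# (a rich-get-richer effect), even though the reward is uninformative.
rng = np.random.default_rng(0)
logits = np.zeros(VOCAB)
lr = 0.5
for _ in range(2000):
    p = softmax(logits)
    y = rng.choice(VOCAB, p=p)                 # sample an output token
    logits += lr * r * (np.eye(VOCAB)[y] - p)  # REINFORCE update

p = softmax(logits)
print("final distribution:", p)
```

Starting from a uniform distribution, the final distribution is far peakier even though no learning signal was present, which is the kind of effect the talk attributes improvements to.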
Relevant papers:
  • Leshem Choshen & Omri Abend (ACL 2018) Inherent Biases in Reference-based Evaluation for Grammatical Error Correction and Text Simplification

  • Leshem Choshen & Omri Abend (ACL 2018) Automatic Metric Validation for Grammatical Error Correction

  • Leshem Choshen & Omri Abend (NAACL-HLT 2018) Reference-less Measure of Faithfulness for Grammatical Error Correction

  • Leshem Choshen & Omri Abend (CoNLL 2019) Automatically Extracting Challenge Sets for Non-local Phenomena in Neural Machine Translation

  • Leshem Choshen, Lior Fox, Zohar Aizenbud & Omri Abend (arXiv 2019) On the Weaknesses of Reinforcement Learning for Neural Machine Translation