Bridging the Gap between Training and Inference for Neural Machine Translation

NLPIR SEMINAR Y2019#26

INTRO

In the new semester, our lab, the Web Search Mining and Security Lab, plans to hold an academic seminar every Monday. Each time, a keynote speaker will share their understanding of papers related to their research.

Arrangement

Tomorrow’s seminar is organized as follows:

  1. The seminar will be held at 1 p.m. on Monday, August 19, 2019, at Zhongguancun Technology Park, Building 5, Room 1306.
  2. Yaofei Yang will give a presentation on the paper titled Bridging the Gap between Training and Inference for Neural Machine Translation.
  3. The seminar will be hosted by Baohua Zhang.

Everyone interested in this topic is welcome to join us.

Bridging the Gap between Training and Inference for Neural Machine Translation

Wen Zhang, Yang Feng, Fandong Meng, Di You, Qun Liu

Abstract

Neural Machine Translation (NMT) generates target words sequentially by predicting the next word conditioned on the context words. At training time, it predicts with the ground truth words as context, while at inference it has to generate the entire sequence from scratch. This discrepancy of the fed context leads to error accumulation along the way. Furthermore, word-level training requires strict matching between the generated sequence and the ground truth sequence, which leads to overcorrection over different but reasonable translations. In this paper, we address these issues by sampling context words not only from the ground truth sequence but also from the sequence predicted by the model during training, where the predicted sequence is selected with a sentence-level optimum. Experiment results on Chinese→English and WMT'14 English→German translation tasks demonstrate that our approach can achieve significant improvements on multiple datasets.
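The core idea of the paper can be illustrated with a short sketch. The following is a minimal, simplified example, not the authors' code: at each decoding step during training, the context word fed to the decoder is drawn from the ground truth with a probability that decays as training progresses, and from the model's own prediction (the "oracle" word) otherwise. The decay schedule, hyperparameter value, and helper names below are illustrative assumptions; the paper additionally selects oracle words with a sentence-level criterion (e.g. BLEU over candidate sequences), which is omitted here.

    import math
    import random

    def sample_context_word(ground_truth_word, oracle_word, epoch, mu=12.0):
        # Probability of feeding the ground-truth word: starts near 1 and
        # decays toward 0 as epochs increase (an inverse-sigmoid-style
        # schedule; mu is a hyperparameter chosen here for illustration).
        p = mu / (mu + math.exp(epoch / mu))
        if random.random() < p:
            return ground_truth_word  # teacher forcing, as in standard training
        return oracle_word            # model's own prediction, as at inference

    # Hypothetical usage inside a training loop (decoder_step, target,
    # predicted, and hidden_state are assumed names):
    # for t in range(1, target_length):
    #     context = sample_context_word(target[t - 1], predicted[t - 1], epoch)
    #     logits, hidden_state = decoder_step(context, hidden_state)

Because the decoder increasingly sees its own predictions as context, training conditions gradually approach inference conditions, while the sentence-level selection of the oracle makes the training signal more tolerant of reasonable alternative translations.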
