Adversarial Representation Learning for Text-to-Image Matching

NLPIR SEMINAR Y2019#29

INTRO

In the new semester, our lab, the Web Search Mining and Security Lab, will hold an academic seminar every Monday. Each session, a keynote speaker will share insights from papers related to his/her research.

Arrangement

Tomorrow’s seminar is organized as follows:

  1. The seminar takes place at 1 p.m. on Monday, September 9, 2019, at Zhongguancun Technology Park, Building 5, Room 1306.
  2. Ziyu Liu will give a presentation on the paper "Adversarial Representation Learning for Text-to-Image Matching".
  3. The seminar will be hosted by Changhe Li.

Everyone interested in this topic is welcome to join us.
The following is the abstract of the paper.

Adversarial Representation Learning for Text-to-Image Matching

Nikolaos Sarafianos, Xiang Xu, Ioannis A. Kakadiaris

Abstract

For many computer vision applications such as image captioning, visual question answering, and person search, learning discriminative feature representations at both image and text level is an essential yet challenging problem. Its challenges originate from the large word variance in the text domain as well as the difficulty of accurately measuring the distance between the features of the two modalities. Most prior work focuses on the latter challenge, by introducing loss functions that help the network learn better feature representations but fail to account for the complexity of the textual input. With that in mind, we introduce TIMAM: a Text-Image Modality Adversarial Matching approach that learns modality-invariant feature representations using adversarial and cross-modal matching objectives. In addition, we demonstrate that BERT, a publicly-available language model that extracts word embeddings, can successfully be applied in the text-to-image matching domain. The proposed approach achieves state-of-the-art cross-modal matching performance on four widely-used publicly-available datasets resulting in absolute improvements ranging from 2% to 5% in terms of rank-1 accuracy.
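To give a flavor of the two objectives the abstract describes, the following is a minimal toy sketch (not the authors' implementation; all function names and the simple linear discriminator are illustrative assumptions). It combines a cross-modal matching term, which pulls paired image and text features together, with an adversarial term, in which a modality discriminator tries to tell image features from text features while the encoders learn to fool it:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def discriminator(feat, w, b):
    """Toy linear discriminator: probability that `feat` is an image feature."""
    return sigmoid(sum(wi * fi for wi, fi in zip(w, feat)) + b)

def timam_style_losses(img_feat, txt_feat, w, b):
    """Illustrative loss terms, assuming a matched image-text pair.

    match_loss: cross-modal matching term (low when the pair is close).
    adv_loss:   the discriminator's objective; the encoders would be
                trained adversarially to *increase* it, making the two
                modalities indistinguishable (modality-invariant features).
    """
    match_loss = 1.0 - cosine(img_feat, txt_feat)
    p_img = discriminator(img_feat, w, b)  # should be high for the discriminator
    p_txt = discriminator(txt_feat, w, b)  # should be low for the discriminator
    adv_loss = -(math.log(p_img) + math.log(1.0 - p_txt))
    return match_loss, adv_loss
```

In practice the encoders would be deep networks (with BERT producing the text-side embeddings) and the two objectives would be optimized in an alternating, minimax fashion; this sketch only shows how the loss terms fit together.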


About the Author: nlpvv
