Using Visual Feature Space as a Pivot Across Languages
People can create image descriptions in thousands of languages, but all of these languages share a single visual space. The aim of this work is to leverage visual feature space to pass information across languages. We show that models trained to generate textual captions in more than one language, conditioned on an input image, can exploit their jointly trained feature space at inference time to pivot across languages. In particular, we demonstrate that the quality of a caption generated from an input image improves when a caption in a second language is also available. More importantly, we show that even without conditioning on any visual input, the model implicitly learns to perform, to some extent, machine translation from one language to the other through the shared visual feature space, even though the multilingual captions used for training were created independently. We first report results on two bilingual image captioning datasets: Multi30k (German-English) and STAIR (Japanese-English). We then report results on the German-Japanese language pair using data from these two datasets and Google Translate. These results pave the way for using the visual world to learn a common representation for language.
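The pivoting idea described above can be illustrated with a minimal sketch. Everything here is hypothetical (the toy data, the lookup-based "encoder" and nearest-neighbor "decoder" are stand-ins for the trained neural components): captions in two languages are grounded in the same visual features, so a caption in one language can be mapped into the shared space and decoded in the other.

```python
# Toy sketch of pivoting through a shared visual feature space.
# All names, data, and functions are hypothetical stand-ins for the
# jointly trained captioning model described in the abstract.

# Hypothetical training data: image feature vectors and, per language,
# the caption each image was paired with during training.
IMAGE_FEATURES = {
    "img_dog": (0.9, 0.1),
    "img_cat": (0.1, 0.9),
}
CAPTIONS = {
    "de": {"img_dog": "ein Hund läuft", "img_cat": "eine Katze sitzt"},
    "en": {"img_dog": "a dog is running", "img_cat": "a cat is sitting"},
}

def embed_caption(caption, lang):
    """Map a caption into the shared visual space (stand-in for a
    text encoder: here, a lookup of the image it was trained with)."""
    for img, text in CAPTIONS[lang].items():
        if text == caption:
            return IMAGE_FEATURES[img]
    raise KeyError(caption)

def decode(feature, lang):
    """Generate a caption in `lang` from a visual feature (stand-in
    for a decoder: nearest training feature by squared distance)."""
    img = min(IMAGE_FEATURES,
              key=lambda i: sum((a - b) ** 2
                                for a, b in zip(IMAGE_FEATURES[i], feature)))
    return CAPTIONS[lang][img]

def pivot_translate(caption, src, tgt):
    """'Translate' with no visual input at inference time, by pivoting
    a source-language caption through the shared visual space."""
    return decode(embed_caption(caption, src), tgt)

print(pivot_translate("ein Hund läuft", "de", "en"))  # a dog is running
```

Note that, as in the work described above, no parallel German-English text is ever consulted: the two caption sets are linked only through the visual features they were grounded in.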
- Aidong Zhang, Committee Chair (Department of Computer Science, UVA)
- Vicente Ordóñez Román, Advisor (Department of Computer Science, UVA)
- Hongning Wang (Department of Computer Science, UVA)
- Yangfeng Ji (Department of Computer Science, UVA)