Learning principled bilingual mappings of word embeddings while preserving monolingual invariance

Abstract

Mapping word embeddings of different languages into a single space has multiple applications. In order to map from a source space into a target space, a common approach is to learn a linear mapping that minimizes the distances between equivalences listed in a bilingual dictionary. In this paper, we propose a framework that generalizes previous work, provides an efficient exact method to learn the optimal linear transformation and yields the best bilingual results in translation induction while preserving monolingual performance in an analogy task.
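The closed-form solution for the constrained linear mapping described above can be sketched with the orthogonal Procrustes problem: given source embeddings X and target embeddings Z for the dictionary pairs, the orthogonal W minimizing ||XW - Z||_F is obtained from an SVD. This is a minimal illustrative sketch, not the paper's full framework (which also covers normalization and centering variants); all names are illustrative.

```python
import numpy as np

def learn_orthogonal_mapping(X, Z):
    """Learn an orthogonal map W minimizing ||XW - Z||_F.

    X, Z: (n_pairs, dim) arrays holding the source and target embeddings
    of the word pairs listed in a bilingual dictionary.
    The solution is W = U V^T, where U S V^T is the SVD of X^T Z.
    """
    U, _, Vt = np.linalg.svd(X.T @ Z)
    # Orthogonality of W preserves monolingual dot products (and hence
    # monolingual invariance) while aligning the two spaces.
    return U @ Vt

# Toy check: if Z is an exactly rotated copy of X, the learned map
# should recover that rotation.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # a random rotation
Z = X @ Q
W = learn_orthogonal_mapping(X, Z)
print(np.allclose(X @ W, Z))  # True: the rotation is recovered
```

Because W is orthogonal, distances and similarities within the source space are unchanged after mapping, which is what allows bilingual alignment without degrading monolingual analogy performance.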

Publication
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing