
Imputing out-of-vocabulary embeddings with LOVE makes language models robust with little cost

Abstract: State-of-the-art NLP systems represent inputs with word embeddings, but these are brittle when faced with Out-of-Vocabulary (OOV) words. To address this issue, we follow the principle of mimick-like models: we generate vectors for unseen words by learning the behavior of pre-trained embeddings from the surface form of words alone. We present a simple contrastive learning framework, LOVE, which extends the word representations of an existing pre-trained language model (such as BERT) and makes it robust to OOV words with few additional parameters. Extensive evaluations demonstrate that our lightweight model achieves similar or even better performance than prior competitors, both on original datasets and on corrupted variants. Moreover, it can be used in a plug-and-play fashion with FastText and BERT, where it significantly improves their robustness.
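The abstract describes the approach only at a high level; the sketch below illustrates what a mimick-style contrastive setup can look like in PyTorch. All names, layer choices, and hyperparameters here are illustrative assumptions rather than the paper's actual LOVE implementation: a small character-level encoder is trained so that the vector it produces from a word's surface form matches that word's pre-trained embedding, against in-batch negatives, and can then impute vectors for OOV words.

```python
# Hypothetical sketch of a mimick-style contrastive objective (not the paper's
# exact architecture): a character-level encoder learns to reproduce
# pre-trained word vectors from surface forms alone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharEncoder(nn.Module):
    """Maps a word's character sequence to a vector in the target embedding space."""
    def __init__(self, n_chars=128, char_dim=64, hidden=256, out_dim=300):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.rnn = nn.LSTM(char_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, out_dim)

    def forward(self, char_ids):              # char_ids: (batch, max_word_len)
        x = self.char_emb(char_ids)
        _, (h, _) = self.rnn(x)
        h = torch.cat([h[0], h[1]], dim=-1)   # concatenate forward/backward states
        return self.proj(h)

def contrastive_loss(pred, target, temperature=0.07):
    """InfoNCE-style loss: each mimicked vector should be closest to its own
    pre-trained embedding and far from the other words in the batch."""
    pred = F.normalize(pred, dim=-1)
    target = F.normalize(target, dim=-1)
    logits = pred @ target.t() / temperature           # (batch, batch) similarities
    labels = torch.arange(pred.size(0), device=pred.device)
    return F.cross_entropy(logits, labels)

# Usage sketch: char_ids encodes each word's characters; teacher_vecs holds the
# corresponding pre-trained embeddings (e.g., FastText or BERT input vectors).
# encoder = CharEncoder()
# loss = contrastive_loss(encoder(char_ids), teacher_vecs)
# loss.backward()
```

At inference time, such an encoder only needs the spelling of an unseen word to produce a vector in the same space as the pre-trained embeddings, which is what makes the plug-and-play use with FastText and BERT possible.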
Document type: Conference papers

https://hal.archives-ouvertes.fr/hal-03613101
Contributor: Lihu Chen
Submitted on: Saturday, March 19, 2022 - 10:52:09 AM
Last modification on: Thursday, March 24, 2022 - 11:22:13 AM

Identifiers

  • HAL Id: hal-03613101, version 2

Citation

Lihu Chen, Gaël Varoquaux, Fabian Suchanek. Imputing out-of-vocabulary embeddings with LOVE makes language models robust with little cost. ACL 2022 - 60th Annual Meeting of the Association for Computational Linguistics, May 2022, Dublin, Ireland. ⟨hal-03613101⟩
