Is anisotropy really the cause of BERT embeddings not being semantic?

Authors:

  1. Alejandro Fuster Baggetto
  2. Víctor Fresno Fernández

Affiliation: Universidad Nacional de Educación a Distancia, Madrid, Spain (ROR: https://ror.org/02msb5n36)

Proceedings: Findings of the Association for Computational Linguistics: EMNLP

Publication year: 2022

Pages: 4271-4281

Type: Conference contribution

Abstract

In this paper we conduct a set of experiments aimed at improving our understanding of the lack of semantic isometry in BERT, i.e., the lack of correspondence between the embedding and meaning spaces of its contextualized word representations. Our empirical results show that, contrary to popular belief, anisotropy is not the root cause of the poor performance of these contextual models' embeddings in semantic tasks. What does affect both anisotropy and semantic isometry is a set of known biases: frequency, subword, punctuation, and case. For each of these biases, we measure its magnitude and the effect of its removal, showing that these biases contribute to, but do not completely explain, the anisotropy and lack of semantic isometry of these contextual language models.
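For readers unfamiliar with the measurement: anisotropy in contextual embedding spaces is conventionally estimated as the average cosine similarity between embeddings of unrelated word occurrences (an isotropic space would yield an average near zero). The sketch below illustrates that standard probe, not the authors' experimental code; it assumes the Hugging Face transformers library, the bert-base-uncased checkpoint, and a few placeholder sentences.

```python
# Minimal sketch of the standard anisotropy probe: mean pairwise cosine
# similarity between contextual token embeddings drawn from different
# contexts. Illustrative only; not the paper's experimental setup.
import torch
from itertools import combinations
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Placeholder sentences; the actual experiments would use a large corpus.
sentences = [
    "The bank raised its interest rates.",
    "She sat on the bank of the river.",
    "A quick brown fox jumps over the lazy dog.",
]

embeddings = []
with torch.no_grad():
    for s in sentences:
        inputs = tokenizer(s, return_tensors="pt")
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
        # Drop the [CLS] and [SEP] special tokens; keep word-piece embeddings.
        embeddings.extend(hidden[1:-1])

# Average cosine similarity over all token pairs: values far above zero
# indicate an anisotropic (cone-shaped) embedding space.
sims = [
    torch.cosine_similarity(a, b, dim=0).item()
    for a, b in combinations(embeddings, 2)
]
print(f"Estimated anisotropy (mean pairwise cosine): {sum(sims) / len(sims):.3f}")
```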