Do not Trust the Experts: How the Lack of Standard Complicates NLP for Historical Irish

Fransen, Theodorus
2023

Abstract

In this paper, we describe how we unearthed some fundamental problems while building an analogy dataset modelled on BATS (Gladkova et al., 2016) to evaluate historical Irish embeddings on their ability to detect orthographic, morphological and semantic similarity. The performance of our models in the analogy task was extremely poor regardless of the architecture, hyperparameters and evaluation metrics, while the qualitative evaluation revealed positive tendencies. We argue that low agreement between field experts on fundamental lexical and orthographic issues, and the lack of a unified editorial standard in available resources, make it impossible to build reliable evaluation datasets for computational models and obtain interpretable results. We emphasise the need for such a standard, particularly for NLP applications, and prompt Celticists and historical linguists to engage in further discussion. We would also like to draw NLP scholars' attention to the role of data and its (extra)linguistic properties in testing new models, technologies and evaluation scenarios.
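As an illustration of the BATS-style analogy evaluation mentioned in the abstract, the sketch below runs a single 3CosAdd analogy query over pre-trained word vectors with gensim. This is not the authors' code or data: the vector file name and the word forms are hypothetical placeholders.

    # Minimal sketch of a BATS-style analogy query (3CosAdd) over word embeddings.
    # Assumptions: gensim is installed, "historical_irish.kv" is a hypothetical
    # KeyedVectors file, and the word forms below are illustrative placeholders.
    from gensim.models import KeyedVectors

    kv = KeyedVectors.load("historical_irish.kv")

    # Analogy a : b :: c : ?  ->  rank candidates d by cos(d, b - a + c)
    a, b, c = "fer", "fir", "ben"  # e.g. a nominative/genitive pair plus a query word
    predictions = kv.most_similar(positive=[b, c], negative=[a], topn=5)
    for word, score in predictions:
        print(f"{word}\t{score:.3f}")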
Year: 2023
Language: English
Conference: The Fourth Workshop on Insights from Negative Results in NLP
Location: Dubrovnik
Date: 5 May 2023
ISBN: 978-1-959429-49-4
Publisher: Association for Computational Linguistics
Dereza, O., Fransen, T., McCrae, J. P., Do not Trust the Experts: How the Lack of Standard Complicates NLP for Historical Irish, in The Fourth Workshop on Insights from Negative Results in NLP (Dubrovnik, 5 May 2023), Association for Computational Linguistics, Dubrovnik 2023: 82-87 [https://hdl.handle.net/10807/270157]

Use this identifier to cite or link to this document: https://hdl.handle.net/10807/270157