Dereza, O., Fransen, T., McCrae, J. P., Do not Trust the Experts: How the Lack of Standard Complicates NLP for Historical Irish, in The Fourth Workshop on Insights from Negative Results in NLP (Dubrovnik, 5 May 2023), Association for Computational Linguistics, Dubrovnik 2023: 82-87 [https://hdl.handle.net/10807/270157]
Do not Trust the Experts: How the Lack of Standard Complicates NLP for Historical Irish
Dereza, Oksana; Fransen, Theodorus; McCrae, John P.
2023
Abstract
In this paper, we describe how we unearthed some fundamental problems while building an analogy dataset modelled on BATS (Gladkova et al., 2016) to evaluate historical Irish embeddings on their ability to detect orthographic, morphological and semantic similarity. The performance of our models in the analogy task was extremely poor regardless of the architecture, hyperparameters and evaluation metrics, while the qualitative evaluation revealed positive tendencies. We argue that low agreement between field experts on fundamental lexical and orthographic issues, and the lack of a unified editorial standard in available resources, make it impossible to build reliable evaluation datasets for computational models and obtain interpretable results. We emphasise the need for such a standard, particularly for NLP applications, and prompt Celticists and historical linguists to engage in further discussion. We would also like to draw NLP scholars' attention to the role of data and its (extra)linguistic properties in testing new models, technologies and evaluation scenarios.