Evaluation of Natural Language Tools for Italian: EVALITA 2007

Sprugnoli, Rachele;
2008

Abstract

EVALITA 2007, the first edition of the initiative devoted to the evaluation of Natural Language Processing tools for Italian, provided a shared framework in which participants' systems could be evaluated on five tasks: Part of Speech Tagging (organised by the University of Bologna), Parsing (organised by the University of Torino), Word Sense Disambiguation (organised by CNR-ILC, Pisa), Temporal Expression Recognition and Normalization (organised by CELCT, Trento), and Named Entity Recognition (organised by FBK, Trento). We believe that the diffusion of shared tasks and shared evaluation practices is a crucial step towards the development of resources and tools for Natural Language Processing. Experiences of this kind make a valuable contribution to the validation of existing models and data, allowing consistent comparisons among approaches and among representation schemes. The strong response to EVALITA, in both the number of participants and the quality of the results, showed that pursuing such goals is feasible not only for English but also for other languages.
Year: 2008
Language: English
Conference: LREC 2008
Location: Marrakech, Morocco
Type: Paper
Start date: 28 May 2008
End date: 30 May 2008
ISBN: 9782951740846
Publisher: LREC
Magnini, B., Cappelli, A., Tamburini, F., Bosco, C., Mazzei, A., Lombardo, V., Bertagna, F., Calzolari, N., Toral, A., Bartalesi Lenzi, V., Sprugnoli, R., Speranza, M., Evaluation of Natural Language Tools for Italian: EVALITA 2007, Paper, in LREC 2008, (Marrakech, Morocco, 28-30 May 2008), LREC, Marrakech 2008: 2536-2543 [http://hdl.handle.net/10807/133020]
Files in this item:
630_paper.pdf — open access — File type: Publisher's version (PDF) — Licence: Creative Commons — Size: 334.48 kB — Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10807/133020
Citations
  • Scopus: 10
  • Web of Science: 1