Caselli, T., Sprugnoli, R., Inel, O., Temporal Information Annotation: Crowd vs. Experts, in Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Portorož, Slovenia, 23-28 May 2016: 3502-3609 [http://hdl.handle.net/10807/132949]
Temporal Information Annotation: Crowd vs. Experts
Sprugnoli, Rachele
2016
Abstract
This paper describes two sets of crowdsourcing experiments on temporal information annotation conducted on two languages, i.e., English and Italian. The first experiment, launched on the CrowdFlower platform, was aimed at classifying temporal relations given target entities. The second one, relying on the CrowdTruth metric, consisted of two subtasks: one devoted to the recognition of events and temporal expressions, and one to the detection and classification of temporal relations. The outcomes of the experiments suggest that crowdsourced annotations are valuable even for a complex task like Temporal Processing.