Abstract | This article describes a toolset designed for general-purpose text annotation tasks. The toolset comprises a pipeline of three independent but interconnected tools covering all steps of the annotation process, from data segmentation through data annotation to annotation evaluation. These tools were built primarily to address three main issues found in current general-purpose annotation tools: (i) the potential confounding of variables that results from having annotators perform both segmentation and annotation in a single step; (ii) the cognitive load imposed on annotators; and (iii) the difficulty of comparing one study with others when different agreement indexes are reported. Within this toolset, these issues are addressed by (i) having separate tools perform data segmentation and annotation; (ii) giving researchers a tool with which they can define and generate ad hoc annotation tools, tailored to specific annotation schemes, to be distributed amongst annotators; and (iii) providing a way to calculate inter-annotator agreement according to six different indexes. Given its modularity, the toolset can be used both as a pipeline, whereby the output of one tool serves as input to the next, and for specific subtasks. It thus lends itself, among other things, to the quick prototyping and testing of annotation schemes, to the execution of full annotation experiments and efforts, and to the study of how different agreement indexes behave on the same data.