Dataset Creation
Annotation
For annotation tools, see Software - Annotation Tools. Annotation can often be greatly sped up by building your own annotation tool with exactly the features you want for your application. This can be a worthwhile time investment, since a well-designed tool pays for itself over the course of a large annotation project.
To annotate data manually (without using crowdsourcing), practitioners generally follow these steps:
- Gather data: Decide the data source and gather some data to annotate. Be sure to consider any ethical issues. If you want to be able to release the data publicly, check for potential copyright or privacy violations.
- Decide what to annotate: Look at a portion of the data and form a rough idea of what you want to annotate – that is, the phenomena you want to capture and at what granularity. Write a document describing the preliminary annotation scheme.
- Pilot annotation: Try annotating some data by yourself or with some colleagues using the annotation scheme. (Annotate the same data.) Compare annotations and decide on edge cases (decide what to do in the difficult borderline cases). Decide if you want to simplify or extend the annotation scheme.
- Refine annotation scheme (iterative): Refine your annotation scheme until you're happy with it and it is easy to annotate. This may take several rounds of pilot annotation.
- Compute inter-annotator agreement: Make sure to doubly annotate a subset of the data you annotate so you can compute inter-annotator agreement (one way to assign the doubly-annotated subset is sketched after this list).
- Full-scale annotation: Annotate a bunch of data yourself or using annotators you've trained (usually it helps to have these annotators involved in developing the annotation scheme during the pilot annotation). Depending on the complexity of the annotation task, you may need to have regular meetings during this time to decide on edge cases as they come up.
- Updates: You can release another version of the dataset to annotate more data or fix errors. You can also set up a bug report form for the dataset so that others can report errors.
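For the inter-annotator agreement step above, one simple scheme is to give every item one annotator and give a random fraction of items a second annotator. A minimal sketch in Python, assuming a list of item IDs and at least two annotator names (the 10% overlap rate and the round-robin assignment are illustrative choices, not a standard):

  import random

  def assign_annotations(item_ids, annotators, overlap_fraction=0.10, seed=0):
      """Assign each item one annotator, plus a second annotator for a
      random subset, so inter-annotator agreement can be computed later."""
      rng = random.Random(seed)
      items = list(item_ids)
      rng.shuffle(items)
      n_overlap = int(len(items) * overlap_fraction)
      assignments = {}
      for i, item in enumerate(items):
          assignment = [annotators[i % len(annotators)]]
          if i < n_overlap:
              # A different annotator for the second pass
              # (assumes at least two annotators).
              assignment.append(annotators[(i + 1) % len(annotators)])
          assignments[item] = assignment
      return assignments

  # Example: 2 of the 20 items get two annotators each.
  print(assign_annotations(range(20), ["ann1", "ann2", "ann3"]))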
Annotation Agreement
- Measures of inter-annotator agreement
  - Cohen's kappa: better than raw percent agreement because it corrects for agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the chance agreement
  - Fleiss' kappa: more general than Cohen's kappa; extends it from two annotators to any number of annotators
- Software
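A minimal sketch of computing agreement on a doubly-annotated subset, using scikit-learn's cohen_kappa_score (the two label lists are made up for illustration; for more than two annotators, statsmodels provides fleiss_kappa):

  from sklearn.metrics import cohen_kappa_score

  # Labels assigned by two annotators to the same ten items (hypothetical data).
  annotator_a = ["pos", "neg", "pos", "pos", "neg", "neu", "pos", "neg", "neu", "pos"]
  annotator_b = ["pos", "neg", "pos", "neg", "neg", "neu", "pos", "pos", "neu", "pos"]

  # Raw percent agreement ignores agreement expected by chance.
  percent = sum(a == b for a, b in zip(annotator_a, annotator_b)) / len(annotator_a)

  # Cohen's kappa = (p_o - p_e) / (1 - p_e) corrects for chance agreement.
  kappa = cohen_kappa_score(annotator_a, annotator_b)

  print(f"percent agreement: {percent:.2f}, Cohen's kappa: {kappa:.2f}")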
Building Your Own Annotation Tool
- For simple projects, annotation can be done in a spreadsheet
- When building your own annotation tool, here are some things to consider (a minimal sketch follows this list)
- The purpose of the tool is to make the annotation faster. Think carefully about what interface will be fastest for trained annotators.
- To speed up development, use whatever language and API you are familiar with or find easiest.
- Think very carefully about ways to reduce unnecessary mouse clicks, typing, reading text, etc. Every mouse click counts. Aggressively remove anything that is unnecessary, like pressing escape or enter to save. Instead, save automatically when the annotator moves to the next example.
- Plan on doing some iterations on the tool. You will need to try it, and change it based on your experience.
- It doesn't need to be perfect, it just needs to be fast to use. It's OK for the tool to have bugs, as long as it isn't widely used and they don't slow down annotation.
- Don't make it full-featured. You just need the features that make annotation fast.
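As one illustration of these points, here is a minimal sketch of a terminal annotation loop in Python. The file names, file formats, and single-keypress label set are assumptions for the example; the key design choice is that every label is saved immediately, so there is no explicit save step and quitting loses nothing:

  import json
  import os

  # Hypothetical label set and file names for this sketch.
  LABELS = {"p": "positive", "n": "negative", "u": "unsure"}
  DATA_FILE = "examples.txt"      # one example per line (assumed format)
  OUT_FILE = "annotations.jsonl"  # one JSON record per labeled example

  def main():
      with open(DATA_FILE) as f:
          examples = [line.strip() for line in f if line.strip()]

      # Resume where we left off by counting already-saved annotations.
      done = 0
      if os.path.exists(OUT_FILE):
          with open(OUT_FILE) as f:
              done = sum(1 for _ in f)

      with open(OUT_FILE, "a") as out:
          for i in range(done, len(examples)):
              print(f"\n[{i + 1}/{len(examples)}] {examples[i]}")
              choice = ""
              while choice not in LABELS and choice != "q":
                  choice = input(f"label ({'/'.join(LABELS)}) or q to quit: ").strip()
              if choice == "q":
                  break
              # Save immediately, so there is no separate save action.
              out.write(json.dumps({"id": i, "label": LABELS[choice]}) + "\n")
              out.flush()

  if __name__ == "__main__":
      main()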
Dataset and Data Selection Issues
Data Validation
Crowdsourcing
See Crowdsourcing.
Alternative Methods
Methods of Faster or Cheaper Annotation
- Garrette & Baldridge 2013 - Learning a Part-of-Speech Tagger from Two Hours of Annotation. Discusses the efficiency of annotating types vs. tokens.
- See also Prompting and Scao & Rush 2021 - How Many Data Points is a Prompt Worth? Prompts are very helpful in small-data regimes, and can be worth hundreds of data points.
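To make the prompting idea concrete, here is a minimal sketch of a few-shot classification prompt in the style studied by that line of work (the task, labels, and examples are hypothetical; the resulting string would be sent to whatever language model you use). Each labeled example embedded in the prompt plays the role of training data, which is why a good prompt can substitute for hundreds of annotated examples:

  # Hypothetical few-shot sentiment prompt.
  FEW_SHOT_EXAMPLES = [
      ("The plot was gripping from start to finish.", "positive"),
      ("I walked out halfway through.", "negative"),
  ]

  def build_prompt(text):
      lines = ["Classify the review as positive or negative.", ""]
      for review, label in FEW_SHOT_EXAMPLES:
          lines.append(f"Review: {review}")
          lines.append(f"Label: {label}")
          lines.append("")
      lines.append(f"Review: {text}")
      lines.append("Label:")
      return "\n".join(lines)

  print(build_prompt("Surprisingly good, I'd watch it again."))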
Methods of Avoiding Dataset Bias or Improving Robustness
- Adversarial Filtering
- Zellers et al 2018 - SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference Introduced adversarial filtering
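A highly simplified sketch of the adversarial-filtering idea (not the exact SWAG procedure): repeatedly train a cheap classifier on the current dataset and drop the examples it classifies most confidently, so the surviving data is hard for models that rely on surface cues. The features and hyperparameters here are placeholders:

  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import cross_val_predict

  def adversarial_filter(texts, labels, keep_fraction=0.5, rounds=3):
      """Iteratively drop the examples a simple model finds easiest."""
      texts, labels = list(texts), list(labels)
      for _ in range(rounds):
          X = TfidfVectorizer().fit_transform(texts)
          # Held-out probabilities via cross-validation, so each example is
          # scored by a model that did not train on it.
          proba = cross_val_predict(
              LogisticRegression(max_iter=1000), X, labels,
              cv=3, method="predict_proba",
          )
          classes = sorted(set(labels))  # matches sklearn's class ordering
          confidence = [row[classes.index(y)] for row, y in zip(proba, labels)]
          # Keep the hardest examples: lowest confidence in the true label.
          order = sorted(range(len(texts)), key=lambda i: confidence[i])
          keep = sorted(order[: int(len(texts) * keep_fraction)])
          texts = [texts[i] for i in keep]
          labels = [labels[i] for i in keep]
      return texts, labels

Note that in SWAG the easy machine-generated negatives are replaced with freshly generated candidates each round rather than simply discarded; this sketch only keeps the hard subset.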
Reducing Bias
Documentation
Related Pages