  * BPE Dropout [[https://arxiv.org/pdf/1910.13267.pdf|Provilkov et al 2020 - BPE-Dropout: Simple and Effective Subword Regularization]] Stochastically corrupts the segmentation procedure of BPE, which leads to multiple segmentations and prevents overfitting to the segmentation procedure. Up to 3 BLEU points improvement over BPE and 0.9 BLEU over previous subword regularization methods.
  * Gradient-based Subword Tokenization: **[[https://arxiv.org/pdf/2106.12672.pdf|Tay et al 2021 - Charformer: Fast Character Transformers via Gradient-based Subword Tokenization]]**
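The BPE-dropout idea can be sketched in a few lines of Python: during encoding, each applicable merge is skipped with some probability, so the same word can receive different segmentations across training epochs. This is a simplified illustration, not the paper's exact algorithm; the merge table and tie-breaking rule below are made up.

```python
import random

def bpe_encode(word, merges, dropout=0.0, rng=random):
    """Greedy BPE encoding of a word. With dropout > 0, each applicable
    merge is skipped with probability `dropout` (the BPE-dropout idea)."""
    tokens = list(word)
    while True:
        # find the highest-priority applicable merge, stochastically dropping some
        best = None
        for i in range(len(tokens) - 1):
            pair = (tokens[i], tokens[i + 1])
            if pair in merges and rng.random() >= dropout:
                rank = merges[pair]
                if best is None or rank < best[0]:
                    best = (rank, i)
        if best is None:
            break  # no merge survived this pass: segmentation ends here
        _, i = best
        tokens[i:i + 2] = [tokens[i] + tokens[i + 1]]
    return tokens

# toy merge table: pair -> priority (lower rank = applied first)
merges = {("l", "o"): 0, ("lo", "w"): 1, ("e", "r"): 2}

print(bpe_encode("lower", merges))               # -> ['low', 'er']
print(bpe_encode("lower", merges, dropout=1.0))  # all merges dropped -> ['l', 'o', 'w', 'e', 'r']
```

With 0 < dropout < 1 the encoder sometimes stops before all merges apply, yielding multiple valid segmentations of the same word, which is the regularization effect the paper exploits.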

==== Effects and Choice of Tokenization ====
  * [[https://arxiv.org/pdf/2310.08754|Ali et al 2024 - Tokenizer Choice For LLM Training: Negligible or Crucial?]] The HuggingFace implementation of BPE appears to be suboptimal; see Table 3.

===== Miscellaneous Papers about Tokenization =====
  * [[https://aclanthology.org/2024.eacl-long.72.pdf|Christopoulou et al 2024 - Text-to-Code Generation with Modality-relative Pre-training]] Adds domain-specific tokens to a pretrained LM for text-to-code generation
  * [[https://arxiv.org/pdf/2406.20086|Feucht et al 2024 - Token Erasure as a Footprint of Implicit Vocabulary Items in LLMs]] Finds the implicit vocabulary in a Transformer decoder model

===== Stopwords =====
  * SentencePiece (does BPE and subword regularization): [[https://github.com/google/sentencepiece]] You don't need to run the Moses tokenizer first (see the [[https://aclanthology.org/D18-2012.pdf|paper]])
  * [[https://github.com/glample/fastBPE|fastBPE]]: A fast C++ implementation of BPE
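As a rough illustration of what tools like fastBPE and SentencePiece's BPE mode learn at scale, here is a toy version of the frequency-based merge-learning loop from Sennrich et al's original BPE paper. The corpus and merge count are illustrative, and real implementations use much faster data structures.

```python
from collections import Counter

def learn_bpe(corpus, num_merges):
    """Learn BPE merges from a word -> frequency dict (toy sketch)."""
    # represent each word as a tuple of symbols, with an end-of-word marker
    vocab = {tuple(word) + ("</w>",): freq for word, freq in corpus.items()}
    merges = []
    for _ in range(num_merges):
        # count all adjacent symbol pairs, weighted by word frequency
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent pair becomes a merge
        merges.append(best)
        merged = best[0] + best[1]
        # apply the new merge everywhere in the vocabulary
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

corpus = {"low": 5, "lower": 2, "newest": 6, "widest": 3}
print(learn_bpe(corpus, 4))
```

On this classic example corpus the first merges build up the frequent suffix `est</w>`, which is exactly the behavior that makes BPE pick up common morphemes before rare whole words.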

===== Related Pages =====
  * [[Data Preparation]]

nlp/tokenization.1710906724.txt.gz · Last modified: 2024/03/20 03:52 by jmflanig