====== Attention Mechanisms ======
  * **[[https://
  * [[https://
  * **LSH Attention** (see the first sketch after this list)
  * **Linearized Attention** (see the second sketch after this list)
    * [[https://
    * [[https://
    * Random Feature Attention: [[https://
  * **[[https://
    * Early related work: [[https://
    * Single-Headed Gated Attention (SHGA): [[https://
  * **Sparse Attention** (see the third sketch after this list)
    * Longformer
    * BigBird
    * Hierarchical Attention Transformers (HAT): [[https://
  * **MoE Sparse Attention**
    * [[https://
    * [[https://
    * [[https://
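The entries above only name each technique, so the next three blocks give minimal, illustrative sketches. None of this code comes from the linked papers; function names, shapes, and parameters are assumptions chosen for clarity. First, LSH attention in the spirit of Reformer: shared queries/keys are hashed with an angular LSH, and each position attends only within its own hash bucket. The real algorithm sorts by bucket and processes chunks; the dense mask here is just for exposition.

<code python>
# Illustrative LSH attention sketch (an assumption, not the papers' code):
# hash shared queries/keys, then restrict attention to same-bucket pairs.
import numpy as np

def lsh_buckets(x, n_buckets, rng):
    # Angular LSH: project onto random directions, take argmax of [xR, -xR].
    R = rng.normal(size=(x.shape[-1], n_buckets // 2))
    proj = x @ R
    return np.argmax(np.concatenate([proj, -proj], axis=-1), axis=-1)

def lsh_attention(QK, V, n_buckets=8, seed=0):
    # Reformer-style shared queries/keys; each token attends within its bucket.
    rng = np.random.default_rng(seed)
    buckets = lsh_buckets(QK, n_buckets, rng)
    scores = QK @ QK.T / np.sqrt(QK.shape[-1])
    scores = np.where(buckets[:, None] != buckets[None, :], -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(1)
QK, V = rng.normal(size=(2, 64, 16))
print(lsh_attention(QK, V).shape)  # (64, 16)
</code>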
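Second, a sketch of linearized attention. The usual ''softmax(QK^T)V'' is replaced by ''phi(Q) (phi(K)^T V)'' with a per-query normalizer, which is linear in sequence length because ''phi(K)^T V'' is only d x d. The ''elu(x)+1'' feature map follows the "Transformers are RNNs" formulation; Random Feature Attention instead uses random features that approximate the softmax kernel. The helper below is a hedged illustration under those assumptions.

<code python>
# Illustrative linearized attention: O(n d^2) instead of O(n^2 d).
import numpy as np

def linear_attention(Q, K, V):
    # Feature map phi(x) = elu(x) + 1 keeps the scores positive.
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))
    Qp, Kp = phi(Q), phi(K)        # (n, d) feature-mapped queries and keys
    KV = Kp.T @ V                  # (d, d): keys and values summarized once
    Z = Qp @ Kp.sum(axis=0)        # (n,): per-query normalizer phi(Q) phi(K)^T 1
    return (Qp @ KV) / Z[:, None]  # (n, d)

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 128, 16))
print(linear_attention(Q, K, V).shape)  # (128, 16)
</code>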
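Third, the core pattern behind the sparse-attention entries (Longformer, BigBird): each query attends only to keys inside a fixed local window; the real models add a few global tokens (and, for BigBird, random tokens) to restore long-range connectivity, which is omitted here. This dense-mask version is only for clarity; the papers use banded computations so the cost actually drops to roughly O(n * window).

<code python>
# Illustrative sliding-window sparse attention (window size is an assumption).
import numpy as np

def sliding_window_attention(Q, K, V, window=4):
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    idx = np.arange(n)
    # Keep only keys within `window` positions of each query; mask out the rest.
    scores = np.where(np.abs(idx[:, None] - idx[None, :]) > window, -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(2)
Q, K, V = rng.normal(size=(3, 64, 16))
print(sliding_window_attention(Q, K, V).shape)  # (64, 16)
</code>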
===== Papers =====