nlp:attention_mechanisms
  * **[[https://
    * [[https://
  * **LSH Attention**
    * [[https://
  * **Linearized Attention** (a minimal sketch follows this list)
    * [[https://
    * Random Feature Attention: [[https://
  * **[[https://
    * Early related work: [[https://
    * Single-Headed Gated Attention (SHGA): [[https://
  * **Sparse Attention** (see the sliding-window sketch after this list)
    * Hierarchical Attention Transformers (HAT): [[https://
  * **MoE Sparse Attention**
    * [[https://
    * [[https://
    * [[https://
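To make the Linearized Attention entry above concrete, here is a minimal NumPy sketch of the kernel feature-map idea behind linear-time attention: ''softmax(QK^T)V'' is approximated by ''phi(Q)(phi(K)^T V)'' with a row-wise normalizer, dropping the cost from O(N^2 d) to O(N d^2). The ''elu(x)+1'' feature map and the function names are illustrative assumptions for this sketch, not the reference implementation of any of the linked papers.

<code python>
import numpy as np

def feature_map(x):
    # elu(x) + 1: a positive feature map, one common choice for linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def linearized_attention(Q, K, V):
    """Approximate softmax attention in O(N * d^2) instead of O(N^2 * d).

    Q, K: (N, d) queries/keys; V: (N, d_v) values.
    """
    Qf, Kf = feature_map(Q), feature_map(K)   # (N, d)
    KV = Kf.T @ V                             # (d, d_v): key/value summary, shared by all queries
    Z = Qf @ Kf.sum(axis=0)                   # (N,): normalizer phi(q_i) . sum_j phi(k_j)
    return (Qf @ KV) / Z[:, None]

# Toy usage
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
print(linearized_attention(Q, K, V).shape)    # (8, 4)
</code>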
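Likewise, for the Sparse Attention entry above, the sketch below shows one common sparsity pattern, a fixed sliding window, purely as an illustration: each query attends only to keys within ''w'' positions on either side, giving O(N * w * d) cost instead of O(N^2 * d). The window size and function name are assumptions for this example, not a specific paper's pattern.

<code python>
import numpy as np

def sliding_window_attention(Q, K, V, w=2):
    """Local attention: query i attends only to keys in [i-w, i+w]."""
    N, d = Q.shape
    out = np.zeros_like(V, dtype=float)
    for i in range(N):
        lo, hi = max(0, i - w), min(N, i + w + 1)
        scores = Q[i] @ K[lo:hi].T / np.sqrt(d)   # scores for the local window only
        weights = np.exp(scores - scores.max())   # numerically stable softmax
        weights /= weights.sum()
        out[i] = weights @ V[lo:hi]
    return out
</code>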