nlp:language_model · Last modified: 2026/03/07 22:21 by jmflanig
      * **[[https://arxiv.org/pdf/2303.18223.pdf|Zhao et al 2023 - A Survey of Large Language Models]]**
      * [[https://arxiv.org/pdf/2404.09022|Weng 2024 - Navigating the Landscape of Large Language Models: A Comprehensive Review and Analysis of Paradigms and Fine-Tuning Strategies]]
      * **[[https://arxiv.org/pdf/2501.17805|2025 - International AI Safety Report]]** (has a good non-technical overview of AI, ML & LLMs)
  * **Language models in the news, etc.**
    * [[https://www.wired.com/story/ai-text-generator-gpt-3-learning-language-fitfully/|Wired - GPT-3]]
| [[https://arxiv.org/pdf/2403.19887|Jamba]] | 2024 | 52B | | Yes | [[https://www.ai21.com/blog/announcing-jamba|blog]] [[https://huggingface.co/ai21labs/Jamba-v0.1|HuggingFace]] |
| [[https://arxiv.org/pdf/2404.14619|OpenELM]] | 2024 | 1.1B | | Yes | |
| [[https://arxiv.org/pdf/2507.20534|Kimi K2]] | 2025 | 1T | | Yes | |
  
  * **Overviews**
    * [[https://arxiv.org/pdf/2307.03109|Chang et al 2023 - A Survey on Evaluation of Large Language Models]]
    * For common evaluation datasets for LLMs, see recent LLM system description papers such as the [[https://arxiv.org/pdf/2407.21783|Llama 3 paper]] (table 2) or [[https://www.anthropic.com/news/claude-sonnet-4-5|Claude Sonnet 4.5]] (evaluation table).
  * lm-evaluation-harness: [[https://github.com/EleutherAI/lm-evaluation-harness|LM Evaluation Harness (EleutherAI)]] (released May 2021)
  * [[https://arxiv.org/pdf/2401.00595|Mizrahi et al 2024 - State of What Art? A Call for Multi-Prompt LLM Evaluation]]
  * **Effects of Length and Irrelevant Context**
    * [[https://arxiv.org/pdf/2402.14848|Levy et al 2024 - Same Task, More Tokens: the Impact of Input Length on the Reasoning Performance of Large Language Models]]
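Much of the evaluation literature above reports perplexity: the exponentiated average negative log-likelihood the model assigns to the evaluation tokens. A minimal sketch of the arithmetic (the per-token log-probabilities are assumed to come from whatever model is being evaluated):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-mean log-probability) over the evaluated tokens."""
    if not token_logprobs:
        raise ValueError("need at least one token log-probability")
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# A model that assigns probability 0.25 to every token is, on average,
# "choosing uniformly among 4 options" -- perplexity is approximately 4.
print(perplexity([math.log(0.25)] * 10))
```

Lower is better; a perplexity of 1 would mean the model assigned probability 1 to every evaluated token.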

===== Tool-Use in LLMs =====
See also [[prompting#Chained or Tool-based Prompting]].
  * **Overviews and Background**
    * [[https://modelcontextprotocol.io/docs/getting-started/intro|Model Context Protocol]]
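Protocols like MCP standardize the plumbing, but the underlying control flow is a short loop: the model either answers or requests a tool call, the host executes the tool, and the result is appended to the context for the next model call. A schematic sketch of that loop — the tool registry, message format, and `scripted_model` stand-in are all hypothetical illustrations, not MCP's actual API:

```python
import json

# Hypothetical tool registry: the host owns the tools, not the model.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def run_agent(model_step, user_message, max_turns=5):
    """model_step is a stand-in for an LLM call: it maps the message
    history to either {"answer": ...} or {"tool": name, "args": {...}}."""
    history = [{"role": "user", "content": user_message}]
    for _ in range(max_turns):
        action = model_step(history)
        if "answer" in action:
            return action["answer"]
        # Model requested a tool: run it and feed the result back.
        result = TOOLS[action["tool"]](action["args"])
        history.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("no answer within turn budget")

# Scripted "model" for illustration: request the add tool, then answer.
def scripted_model(history):
    if history[-1]["role"] == "user":
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The sum is {history[-1]['content']}"}

print(run_agent(scripted_model, "What is 2 + 3?"))  # → The sum is 5
```

Real frameworks add schemas, authentication, and streaming on top, but the answer-or-call-a-tool loop is the core pattern.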

===== Retrieval-Augmented Generation (RAG) =====
See [[Retrieval-Augmented Methods]].

===== Limitations of Current LLMs =====
  * [[https://aclanthology.org/2025.acl-long.1016.pdf|Shaikh et al 2025 - Navigating Rifts in Human-LLM Grounding: Study and Benchmark]]
  
===== Questions and Critiques of LLMs =====
  * Extracting Training Data
    * [[https://arxiv.org/pdf/2012.07805.pdf|Carlini et al 2020 - Extracting Training Data from Large Language Models]] [[https://github.com/ftramer/LM_Memorization|github]]
    * [[https://arxiv.org/pdf/2601.02671|Ahmed et al 2026 - Extracting Books from Production Language Models]]
  * Membership Inference for Training Data
    * (decide whether a given sample was part of the model's training data)
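The simplest membership-inference baseline makes that parenthetical concrete: models tend to assign lower loss to samples they were trained on, so predict "member" when the model's loss on a sample falls below a threshold calibrated on known non-members. A toy sketch — the loss values and calibration scheme here are illustrative, not any particular paper's attack:

```python
def is_member(loss, threshold):
    """Loss-threshold attack: low loss suggests the sample was trained on."""
    return loss < threshold

def calibrate_threshold(nonmember_losses, fpr=0.1):
    """Pick the threshold so roughly `fpr` of known non-members
    would be misclassified as members."""
    ordered = sorted(nonmember_losses)
    idx = max(0, int(fpr * len(ordered)) - 1)
    return ordered[idx]

# Hypothetical per-sample losses on known non-member data.
losses = [2.1, 2.4, 2.7, 3.0, 3.3, 3.5, 3.8, 4.0, 4.2, 4.5]
t = calibrate_threshold(losses, fpr=0.1)
print(t, is_member(1.2, t), is_member(3.9, t))  # → 2.1 True False
```

Stronger attacks replace the raw loss with a calibrated score (e.g. comparing against a reference model), but the threshold-a-statistic structure stays the same.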
    * [[https://arxiv.org/pdf/2202.07105|Xu & McAuley et al 2022 - A Survey on Model Compression and Acceleration for Pretrained Language Models]]
    * **[[https://arxiv.org/pdf/2312.03863|Wan et al 2023 - Efficient Large Language Models: A Survey]]** Updated continuously. **See paper list [[https://github.com/AIoT-MLSys-Lab/Efficient-LLMs-Survey|here]]**

===== Economics of LLMs =====
  * [[https://arxiv.org/pdf/2306.07402|Howell et al 2023 - The Economic Trade-offs of Large Language Models: A Case Study]]
  
===== Miscellaneous =====