nlp:prompting

nlp:prompting · Last modified: 2026/02/13 00:31 by jmflanig
  * [[https://arxiv.org/pdf/2210.01240.pdf|Saparov & He 2022 - Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought]]
  * [[https://arxiv.org/pdf/2210.03629.pdf|Yao et al 2022 - ReAct: Synergizing Reasoning and Acting in Language Models]] - The basis of LangChain
  * **[[https://arxiv.org/pdf/2211.12588|Chen et al 2022 - Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks]]**
  * [[https://arxiv.org/pdf/2305.04091.pdf|Wang et al 2023 - Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models]]
  * [[https://arxiv.org/pdf/2305.14992|Hao et al 2023 - Reasoning with Language Model is Planning with World Model]]
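The Program-of-Thoughts idea in the list above can be sketched in a few lines: instead of reasoning to a numeric answer in free text, the model is prompted to emit a short program, and a separate executor runs that program and reads off the answer. A minimal sketch, assuming the model's completion is the hypothetical program string below (it is not taken from the paper, and the bare `exec` sandbox is a deliberate simplification):

```python
# Program-of-Thoughts sketch: the model writes code, the host executes it.
# The string below stands in for a hypothetical LLM completion.
generated_program = """
principal = 1000
interest_rate = 0.05
years = 3
# compound annually
ans = principal * (1 + interest_rate) ** years
"""

def run_program_of_thought(program: str) -> float:
    """Execute model-generated code in a restricted namespace and
    return the value bound to the conventional `ans` variable."""
    namespace: dict = {}
    exec(program, {"__builtins__": {}}, namespace)  # no builtins: crude sandbox
    return namespace["ans"]

print(run_program_of_thought(generated_program))  # 1157.625
```

The point of the paper is that arithmetic is delegated to the interpreter, so the language model only has to get the program right, not the calculation.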
  * [[https://arxiv.org/pdf/2403.02178|Chen et al 2024 - Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models]] - Masks part of the CoT during training to get better results
  * [[https://arxiv.org/pdf/2502.15589|Zhang et al 2025 - LightThinker: Thinking Step-by-Step Compression]]
  * [[https://arxiv.org/pdf/2505.24217|Leng et al 2025 - Semi-structured LLM Reasoners Can Be Rigorously Audited]] - A William Cohen paper
  * **Analysis of Chain of Thought**
    * [[https://arxiv.org/pdf/2310.07923|Merrill & Sabharwal 2024 - The Expressive Power of Transformers with Chain of Thought]]
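The Masked Thought entry above trains on reasoning traces in which a random fraction of the reasoning tokens is replaced by a mask token. A toy version of that data augmentation step (the mask rate, mask token, and whitespace tokenization are illustrative assumptions, not the paper's exact setup):

```python
import random

def mask_reasoning(tokens, mask_rate=0.2, mask_token="[MASK]", seed=0):
    """Replace a random fraction of reasoning-trace tokens with a mask
    token, Masked Thought-style (toy data-augmentation sketch)."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [mask_token if rng.random() < mask_rate else t for t in tokens]

trace = "first add 3 and 4 to get 7 then multiply by 2 to get 14".split()
print(" ".join(mask_reasoning(trace)))
```

Training on such partially masked traces forces the model not to over-rely on any single surface token of the reasoning chain.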
  * **Overviews**
    * [[https://arxiv.org/pdf/2304.08354.pdf|Qin et al 2023 - Tool Learning with Foundation Models]]
    * [[https://modelcontextprotocol.io/docs/getting-started/intro|Model Context Protocol]] - A standard for connecting LLMs to tools, introduced by Anthropic in 2024
  * [[https://arxiv.org/pdf/2210.03629.pdf|Yao et al 2022 - ReAct: Synergizing Reasoning and Acting in Language Models]]. This kind of thing is implemented in [[https://github.com/hwchase17/langchain|LangChain]]
  * [[https://arxiv.org/abs/2302.04761|Schick et al 2023 - Toolformer: Language Models Can Teach Themselves to Use Tools]]
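The ReAct pattern listed above interleaves free-text "thoughts" with tool-calling "actions" and feeds each tool's "observation" back into the context before the next step. A minimal sketch of the control loop, with a scripted stand-in for the LLM and one toy calculator tool (the scripted turns and tool name are illustrative assumptions, not LangChain's actual API):

```python
# ReAct-style loop with a scripted stand-in for the LLM and one toy tool.
TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}

# Hypothetical model turns: (thought, action, action_input);
# action "final" ends the loop with an answer.
SCRIPTED_TURNS = [
    ("I need the product of 17 and 24.", "calculator", "17 * 24"),
    ("I have the result, so I can answer.", "final", "408"),
]

def react_loop(turns):
    """Run the thought -> action -> observation cycle over scripted turns,
    accumulating the growing context a real ReAct agent would see."""
    context = []
    for thought, action, arg in turns:
        context.append(f"Thought: {thought}")
        if action == "final":
            context.append(f"Final Answer: {arg}")
            return arg, context
        observation = TOOLS[action](arg)  # run the named tool
        context.append(f"Action: {action}[{arg}]")
        context.append(f"Observation: {observation}")

answer, transcript = react_loop(SCRIPTED_TURNS)
print("\n".join(transcript))
```

In a real agent the scripted turns are replaced by LLM calls that see the accumulated `context` and decide the next thought and action themselves; frameworks like LangChain package exactly this loop.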
