====== nlp:agi ======

  * [[https://zoo.cs.yale.edu/classes/cs671/12f/12f-papers/adams-agi-landscape.pdf|Adams et al 2012 - Mapping the Landscape of Human-Level Artificial General Intelligence]] Good overview from 2012
  * [[https://arxiv.org/pdf/2208.12852|Michael et al 2022 - What Do NLP Researchers Believe? Results of the NLP Community Metasurvey]] (Contains questions about AGI)
  * [[https://arxiv.org/pdf/2402.03962|Altmeyer et al 2024 - Position: Stop Making Unscientific AGI Performance Claims]]
  * [[https://arxiv.org/pdf/2405.10313|Feng et al 2024 - How Far Are We From AGI: Are LLMs All We Need?]]
  * [[https://arxiv.org/pdf/2502.03689|Blili-Hamelin et al 2025 - Stop treating ‘AGI’ as the north-star goal of AI research]]

===== Safety =====
  * [[https://gcrinstitute.org/papers/033_agi-survey.pdf|Baum et al 2017 - A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy]]
  * [[https://arxiv.org/pdf/2304.06364|Zong et al 2023 - AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models]] This paper has > 300 citations (as of 2025) and was accepted to ACL Findings. I can't believe our crappy review process.
  * Humanity's Last Exam
  * ARC-AGI-2 (2025): [[https://arxiv.org/pdf/2505.11831|paper]], [[https://arcprize.org/blog/announcing-arc-agi-2-and-arc-prize-2025|website]]

===== Related Pages =====
  * [[Alignment]]
  * [[LLM Safety]]
nlp/agi.1742443896.txt.gz · Last modified: 2025/03/20 04:11 by jmflanig
