nlp:llm_safety — revised 2025/05/30 17:02 to 2026/03/07 22:18 (current) by jmflanig
===== Overviews =====

  * [[https://arxiv.org/pdf/2308.05374|Liu et al 2023 - Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment]]
  * [[https://arxiv.org/pdf/2402.09283|Dong et al 2024 - Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey]]
  * **[[https://arxiv.org/pdf/2412.17686|Shi et al 2024 - Large Language Model Safety: A Holistic Survey]]** (great survey)
  * [[https://arxiv.org/pdf/2501.17805|2025 - International AI Safety Report]] (safety for AI in general)
  
===== Papers =====
  * [[https://arxiv.org/pdf/2404.12038|Xu et al 2024 - Uncovering Safety Risks in Open-source LLMs through Concept Activation Vector]]
  * [[https://arxiv.org/pdf/2404.13208|Wallace et al 2024 - The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions]]
  * [[https://arxiv.org/pdf/2508.06601|O'Brien et al 2025 - Deep Ignorance: Filtering Pretraining Data Builds Tamper-Resistant Safeguards into Open-Weight LLMs]]
  
===== Jailbreaking LLMs =====