====== Neurosymbolic Methods ======

===== Overviews =====

  * See the related work (Section 2) of [[https://www.aclweb.org/anthology/2020.emnlp-main.453.pdf|Wu et al 2020]]
  * [[https://arxiv.org/pdf/2012.05876|d'Avila Garcez & Lamb 2020 - Neurosymbolic AI: The 3rd Wave]]
  * [[https://arxiv.org/pdf/2010.05446.pdf|Zhang et al 2020 - Neural, Symbolic and Neural-Symbolic Reasoning on Knowledge Graphs]]
  * [[https://arxiv.org/pdf/2305.00813|Sheth et al 2023 - Neurosymbolic AI - Why, What, and How]]
  * [[https://arxiv.org/pdf/2501.05435|Colelough & Regli 2025 - Neuro-Symbolic AI in 2024: A Systematic Review]]
  * **Tutorials**
    * [[https://ns4nlp-coling.github.io/|NS4NLP: Neuro-Symbolic Modeling for NLP - COLING 2022 Tutorial]]

===== Papers =====

  * [[ml:modularity#Neural Module Networks]]
  * **[[https://arxiv.org/pdf/1612.00712.pdf|Murray & Krishnamurthy 2016 - Probabilistic Neural Programs]]**
  * [[https://arxiv.org/pdf/1603.06318.pdf|Hu et al 2016 - Harnessing Deep Neural Networks with Logic Rules]]
  * [[https://arxiv.org/pdf/1711.04574.pdf|Evans & Grefenstette 2017 - Learning Explanatory Rules from Noisy Data]] - introduces Differentiable Inductive Logic Programming (DILP), trained with backpropagation; the authors argue DILP provides "data efficiency and generalisation beyond what neural networks can achieve on their own."
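The core idea behind DILP-style differentiable logic can be illustrated with a minimal sketch (this is not DILP itself): Boolean connectives are relaxed to smooth t-norm operations on truth values in [0, 1], so a rule's score becomes differentiable with respect to the underlying facts and can be trained by backpropagation. The ''parent'' matrix and all names below are invented toy examples.

```python
# A minimal, illustrative sketch of differentiable logic (not DILP itself):
# relax AND/OR to smooth operations so rule scores are differentiable.
import numpy as np

def soft_and(a, b):
    # product t-norm: agrees with Boolean AND at {0, 1}, smooth in between
    return a * b

def soft_or(a, b):
    # probabilistic sum: agrees with Boolean OR at {0, 1}
    return a + b - a * b

# Toy relaxed facts: parent[x, y] is the model's confidence that
# entity x is a parent of entity y (values are made up).
parent = np.array([[0.0, 0.9, 0.0],
                   [0.0, 0.0, 0.8],
                   [0.0, 0.0, 0.0]])

# Rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z)
# The existential over Y becomes a soft OR across all candidate Ys.
def grandparent(x, z):
    score = 0.0
    for y in range(parent.shape[0]):
        score = soft_or(score, soft_and(parent[x, y], parent[y, z]))
    return score

print(round(grandparent(0, 2), 2))  # 0.9 * 0.8 = 0.72
```

Because every operation is smooth, a loss on ''grandparent'' scores can push gradients back into the ''parent'' entries (or into network outputs that produce them), which is what lets such systems learn rules from noisy data.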
  * [[https://arxiv.org/pdf/1802.08535.pdf|Evans et al 2018 - Can Neural Networks Understand Logical Entailment?]]
  * [[https://www.aclweb.org/anthology/D18-1215.pdf|Wang & Poon 2018 - Deep Probabilistic Logic: A Unifying Framework for Indirect Supervision]] (summary [[https://www.aclweb.org/anthology/attachments/K19-1065.Supplementary_Material.pdf|here]], from [[https://www.aclweb.org/anthology/K19-1065.pdf|Wang et al 2019]])
  * [[https://www.aclweb.org/anthology/K19-1065.pdf|Wang et al 2019 - Evidence Sentence Extraction for Machine Reading Comprehension]] - applies Deep Probabilistic Logic
  * [[https://jair.org/index.php/jair/article/view/11944/26561|Cohen et al 2020 - TensorLog: A Probabilistic Database Implemented Using Deep-Learning Infrastructure]]
  * [[https://www.aclweb.org/anthology/2020.emnlp-main.453.pdf|Wu et al 2020 - Deep Weighted MaxSAT for Aspect-based Opinion Extraction]] - see the related work section
  * [[https://arxiv.org/pdf/2203.04857.pdf|Feng et al 2022 - Neuro-symbolic Natural Logic with Introspective Revision for Natural Language Inference]]
  * [[https://arxiv.org/pdf/2208.05051.pdf|Qian et al 2022 - Limitations of Language Models in Arithmetic and Symbolic Induction]]
  * [[https://aclanthology.org/2022.naacl-main.341.pdf|West et al 2022 - Symbolic Knowledge Distillation: from General Language Models to Commonsense Models]]
  * [[https://arxiv.org/pdf/2211.11559|Gupta & Kembhavi 2022 - Visual Programming: Compositional Visual Reasoning without Training]]
  * [[https://arxiv.org/pdf/2305.12744|Pan et al 2023 - Fact-Checking Complex Claims with Program-Guided Reasoning]]
  * [[https://arxiv.org/pdf/2305.12295|Pan et al 2023 - Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning]]
  * [[https://arxiv.org/pdf/2310.05253|Wang & Shu 2023 - Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models]]
  * [[https://arxiv.org/pdf/2310.16035.pdf|Hsu et al 2023 - What’s Left? Concept Grounding with Logic-Enhanced Foundation Models]]
  * **[[https://arxiv.org/pdf/2506.04592|Liu et al 2025 - Safe: Enhancing Mathematical Reasoning in Large Language Models via Retrospective Step-aware Formal Verification]]**

===== People =====

  * [[https://scholar.google.com/citations?user=8ys-38kAAAAJ&hl=en|William W. Cohen]]

===== Related Pages =====

  * [[Knowledge-Enhanced Methods]]
  * [[Logic in NLP]]
  * [[ml:modularity#Neural Module Networks]]
  * [[ml:Program Induction#Neural Program Induction]]
  * [[ml:Probabilistic Logic]]