Examples of Good Papers

Not-So-Good Examples

Here are some papers that perhaps could have been better presented.

  • Vaswani et al. 2017 - Attention Is All You Need. The main issue with the paper is that it gives no ablation studies or experimental justification for its design decisions. Even if the authors lacked the computing resources for ablation studies, they could still have shown the incremental improvements as each component was added (for example, as in this paper). The paper is also written in a way that is hard to follow: it reveals the architecture piece by piece without giving a good high-level overview at the beginning. It also does not clearly specify which parts of the architecture are novel (invented by the authors) and which come from elsewhere. For example, the authors invented multi-head attention, but they never clearly state this.
  • DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. This paper is well written but has a major flaw: it ignores the prior work behind what the authors have done. There is no related-work section, and aside from citations of the authors' own work, there are no citations in the main body of the paper (Section 2: Approach). Important prior work is missing, such as Havrilla et al., Teaching Large Language Models to Reason with Reinforcement Learning. This incorrectly makes it look as though the authors invented everything.
nlp/paper_examples.txt · Last modified: 2025/05/31 18:40 by jmflanig
