Explainability can be crucial for the adoption of automatic methods. For example, doctors are highly unlikely to use an automatic diagnosis system that cannot explain its diagnoses. Explainability is an open problem for machine learning and NLP (see open problems). See also Wikipedia - Explainable AI.
Jeff's opinion: I have reservations about gradient-based methods because a small effect from an infinitesimal change doesn't necessarily mean a feature isn't important: the feature could be important yet saturate the activation function, producing a flat spot in the gradient. I prefer methods like Li et al. 2016 - Understanding Neural Networks through Representation Erasure and Burns et al. 2019 - Interpreting Black Box Models via Hypothesis Testing.
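A minimal sketch of the saturation concern (a toy example, not taken from the cited papers): a single sigmoid unit whose input clearly matters, yet whose gradient-based saliency is near zero because the unit is saturated, while an erasure-style score in the spirit of Li et al. 2016 recovers the importance. All names here are illustrative.

```python
import torch

# Toy "model": one sigmoid unit driven far into saturation.
# The feature x clearly matters (erasing it changes the output a lot),
# but the gradient at x is nearly zero because sigmoid is flat there.
w = torch.tensor(10.0)
x = torch.tensor(1.0, requires_grad=True)
y = torch.sigmoid(w * x)
y.backward()

print(f"output             = {y.item():.4f}")        # ~1.0 (saturated)
print(f"gradient saliency  = {x.grad.item():.2e}")   # ~4.5e-04, misleadingly tiny

# Erasure-style importance: zero out the feature and measure the
# change in the model's output.
with torch.no_grad():
    y_erased = torch.sigmoid(w * torch.tensor(0.0))
print(f"erasure importance = {(y - y_erased).abs().item():.4f}")  # ~0.5, large
```

The gradient-based score is small only because the activation is flat at this point, whereas the erasure score reflects the feature's actual effect on the output.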
Overview blog post: 2020 - Making Decision Trees Accurate Again: Explaining What Explainable AI Did Not
Overview: Jacovi 2020