Story time: I was playing with the Kaggle Lending Club dataset and getting <i>really</i> high accuracy (high 90s) predicting default with an out-of-the-box sklearn model. Just for fun I ran it through LIME and discovered that every single default was <i>strongly</i> predicted by the "recoveries" feature. I looked into the data dictionary (yeah, I should have done so first...) and discovered that this feature indicates the amount of debt recovered by collections agencies... in other words, a value that only exists after the loan has already gone bad. Textbook target leakage.
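For anyone who wants to run the same sanity check, here's a minimal sketch using lime's LimeTabularExplainer with a stock sklearn classifier. The data below is synthetic with a deliberately leaky "recoveries" column (the other column names are just stand-ins), so swap in the real loan table:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for the loan table; replace with your own features/labels.
rng = np.random.default_rng(0)
feature_names = ["loan_amnt", "int_rate", "annual_inc", "dti", "recoveries"]
X = rng.normal(size=(2000, len(feature_names)))
# Deliberate leak: the default label is a direct function of "recoveries".
y = (X[:, 4] > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["fully_paid", "default"],
    mode="classification",
)

# Explain a single prediction; the leaked feature dominates the weights.
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
for feature, weight in exp.as_list():
    print(f"{weight:+.3f}  {feature}")
```

If a single feature carries almost all the weight for every explained instance, that's usually the cue to go back and read the data dictionary.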
I've found LIME extremely useful. On ML consulting projects, my clients very often value being able to explain why the model made a particular prediction more than a 0.05% improvement in accuracy.