I'm cautiously optimistic about this style of interpretability work. It seems to tell us something without actually telling us much. Past efforts like this didn't inspire anyone to build better models. Translating a neural net into a decision tree doesn't make interpretation any more useful, tbh. Who knows! Do what you like. Good luck to everyone in finding these useful.
Chris Olah always does such interesting work, really teasing apart ML abstractions and thinking about them in new ways. One of the best ML researchers in my book!