Think in terms of function decomposition rather than trying to interpret each individual parameter, and you will find papers and techniques that make deep neural networks quite explainable.

Contrastive Layer-wise Relevance Propagation (CLRP) would be a good starting point for generating high-fidelity explanations at any point in the network.
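
For a sense of the mechanics, here is a minimal NumPy sketch of the LRP-epsilon rule with a simple class-contrast on top. The toy two-layer network, its random weights, and the "subtract the mean relevance of the competing classes" contrast are illustrative assumptions, not the exact dual-class construction from the CLRP paper:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))   # input -> hidden (hypothetical weights)
    W2 = rng.normal(size=(8, 3))   # hidden -> 3 classes

    def forward(x):
        # Forward pass, keeping the hidden activations LRP needs.
        h = np.maximum(0.0, x @ W1)
        return h, h @ W2

    def lrp_linear(a, W, R_out, eps=1e-6):
        # LRP-epsilon rule for one linear layer: redistribute the output
        # relevance R_out to the inputs a in proportion to a_i * w_ij.
        z = a @ W
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer, avoids /0
        s = R_out / z
        return a * (W @ s)

    def lrp(x, class_idx):
        # Relevance of each input feature for one output class.
        h, logits = forward(x)
        R = np.zeros_like(logits)
        R[class_idx] = logits[class_idx]      # start from the target logit
        R_h = lrp_linear(h, W2, R)            # back through layer 2
        return lrp_linear(x, W1, R_h)         # back through layer 1

    x = rng.normal(size=4)
    target = 0
    # Contrastive step: subtract the mean relevance of the competing
    # classes, so only class-discriminative evidence survives.
    others = [lrp(x, c) for c in range(3) if c != target]
    print(lrp(x, target) - np.mean(others, axis=0))

The contrast step is what makes the attribution class-discriminative: evidence that supports every class equally cancels out.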



I am very aware of these, and I like methods like CLRP. It's great to have these tools to debug neural networks.

However, for many who argue for explainable AI, CLRP falls far short of what they want. In particular, the symbolic AI crowd would scoff at it. That is the crux of the issue in my eyes: the symbolic AI crowd has taken "explainability" as a way to justify methods that don't work.

I have no issue with methods that allow greater understanding of neural net internals; that's essentially what neural net researchers spend all their time on (and it's the path towards better-performing methods).


I'd be interested in seeing where the symbolic AI crowd has disagreed with that. The only group I know of who disagree is a small set of people who think you should build inherently explainable models rather than explain the decisions of deep neural networks.



