
Again the non-sequitur argument that an explainable model must be worse than a deep learning system, and that there has to be a tradeoff.

You don’t need to reduce complexity to achieve explainability. You just need to decompose the function into smaller parts that you can understand.

Contrastive LRP, for example, is a function decomposition technique for explaining deep neural networks with high fidelity.
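To make the decomposition idea concrete, here is a minimal sketch of the basic LRP epsilon rule for a single dense layer (the relevance redistribution that Contrastive LRP builds on, not Contrastive LRP itself; the function name and toy shapes are my own):

```python
import numpy as np

def lrp_epsilon(W, b, x, relevance_out, eps=1e-6):
    """Redistribute a layer's output relevance back to its inputs,
    proportionally to each input's contribution x_i * W_ij to the
    pre-activation. This is the standard LRP epsilon rule."""
    z = x @ W + b                 # pre-activations, shape (out,)
    z = z + eps * np.sign(z)      # stabilizer avoids division by zero
    s = relevance_out / z         # shape (out,)
    return x * (W @ s)            # relevance per input, shape (in,)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))       # toy layer: 3 inputs, 2 outputs
x = rng.normal(size=3)
R = lrp_epsilon(W, np.zeros(2), x, relevance_out=np.ones(2))
print(R.shape)
```

The key property is conservation: the input relevances sum (up to the epsilon stabilizer) to the output relevance, so applying the rule layer by layer decomposes the network's output into per-input contributions without changing the model itself.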


