
It's mentioned briefly in the paper(1), but I'm more interested in the interpretability implications of this approach. In some respects, it marries the interpretability/editability of a small decision tree with the expressive power of a large neural network. Usually those two sit at opposite ends of a tradeoff spectrum - but this approach, if it scales, might shift the Pareto frontier.

(1): "As a byproduct, the learned regions can also be used as a partition of the input space for interpretability, surgical model editing, catastrophic forgetting mitigation, reduction of replay data budget, etc."
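To make the "surgical model editing" point concrete, here's a minimal sketch of the general idea of a hard input-space partition: a shallow, tree-style split routes each input to exactly one small per-region model, so editing one region's parameters provably cannot change predictions in any other region. This is a hypothetical illustration of region-based routing in general (the class, thresholds, and per-region linear experts are all my own invention), not the paper's actual method.

```python
import numpy as np

class PartitionedModel:
    """Hypothetical sketch: axis-aligned split on feature 0 defines the
    regions (a depth-1 "decision tree" partition), with one linear
    expert per region. Illustrative only, not the paper's approach."""

    def __init__(self, thresholds, n_features, seed=0):
        rng = np.random.default_rng(seed)
        self.thresholds = np.asarray(thresholds)
        n_regions = len(thresholds) + 1
        self.weights = rng.normal(size=(n_regions, n_features))

    def region(self, x):
        # Index of the region x falls into -- interpretable routing.
        return int(np.searchsorted(self.thresholds, x[0]))

    def predict(self, x):
        return float(self.weights[self.region(x)] @ x)

    def edit_region(self, r, new_weights):
        # "Surgical" edit: only inputs routed to region r are affected.
        self.weights[r] = new_weights

model = PartitionedModel(thresholds=[0.0], n_features=2)
x_left, x_right = np.array([-1.0, 2.0]), np.array([1.0, 2.0])
before = model.predict(x_left)

model.edit_region(1, np.zeros(2))        # zero out the right region only
assert model.predict(x_right) == 0.0     # edited region changed
assert model.predict(x_left) == before   # other region untouched
```

Because the routing is a hard partition rather than a soft mixture, the edit's blast radius is exactly one region - which is also what would make catastrophic-forgetting mitigation and replay-budget reduction plausible.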


