
No. I'm a natural general intelligence with no safety features. One that has a hallucinatory view of the world and a neural architecture tracing back a couple of hundred million years, one that continuously learns, has redundant features, and has the capacity for self-reflection.

We're talking about algorithms that are based on a simplified neural architecture, with no redundancy and no self-reflection, and that are still quite immature.

Nevertheless, we're being asked to trust a black-box AI that we cannot interrogate?

Yes, of course, what's the worst that can happen?



It may well be that the human visual system is better for most tasks, but then that is what matters: which system is better for the task. The presence of an explanation doesn't matter, and neither system comes with one anyway.

On the other hand, if you place Magnus Carlsen against AlphaZero in a game of chess, I will bet on AlphaZero. If, however, you reduce the complexity of AlphaZero to a level where it can produce an explanation I can understand, I would instead bet on Magnus Carlsen.

Of course we should care about the quality of AI systems, but chasing a human-understandable explanation is just the wrong way to go about it, since in many cases it necessarily limits the quality of the decisions.


Again the non sequitur that an explainable model must be worse than a deep learning system and that there has to be a tradeoff.

You don't need to reduce complexity to achieve explainability. You just need to decompose the function into smaller parts that you can understand.

Contrastive LRP, for example, is a function decomposition technique for explaining deep neural networks with high fidelity.
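
For anyone unfamiliar with how LRP decomposes a prediction, here is a minimal sketch of the basic (non-contrastive) epsilon rule on a toy fully-connected ReLU network. This is only an illustration of the general idea, not the contrastive variant above; the layer sizes, weights, and function names are made up for the example.

    # Toy LRP (epsilon rule): redistribute the output score of one class
    # back onto the input features of a small fully-connected ReLU net.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy network: 4 -> 3 -> 2, ReLU on the hidden layer (random weights).
    weights = [rng.normal(size=(4, 3)), rng.normal(size=(3, 2))]

    def forward(x):
        """Run the network and keep the activations of every layer."""
        activations = [x]
        for i, w in enumerate(weights):
            x = x @ w
            if i < len(weights) - 1:      # ReLU on hidden layers only
                x = np.maximum(x, 0.0)
            activations.append(x)
        return activations

    def lrp_epsilon(activations, relevance, eps=1e-6):
        """Propagate relevance from the output back to the input."""
        for w, a in zip(reversed(weights), reversed(activations[:-1])):
            z = a @ w
            z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabiliser, avoids /0
            s = relevance / z              # relevance per unit of activation
            relevance = a * (s @ w.T)      # redistribute to the layer below
        return relevance

    x = np.array([1.0, -0.5, 2.0, 0.3])
    acts = forward(x)
    out = acts[-1]

    # Start from the relevance of the predicted class only.
    r_out = np.zeros_like(out)
    r_out[np.argmax(out)] = out[np.argmax(out)]

    r_in = lrp_epsilon(acts, r_out)
    print("input relevances:", r_in)       # one score per input feature

The point is that the input relevances sum (approximately) to the output score being explained, so you get a decomposition of the decision over the inputs without retraining or simplifying the model itself.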


I respectfully disagree.

Here's a paper that you may not have read.

https://arxiv.org/pdf/1806.00069.pdf



