No. I'm a natural general intelligence with no safety features: one with a hallucinatory view of the world and a neural architecture tracing back a couple of hundred million years, which continuously learns, has redundant features, and is capable of self-reflection.
We're talking about algorithms based on a simplified neural architecture, with no redundancy and no self-reflection, which are still quite immature.
And yet we're being asked to trust a black-box AI that we cannot interrogate?
It may well be that the human visual system is better for most tasks, but then that is precisely what matters: which system is better for the task. The presence of an explanation doesn't change that, and neither system comes with one.
On the other hand, if you place Magnus Carlsen against AlphaZero in a game of chess, I will bet on AlphaZero. If, however, you reduce AlphaZero's complexity to a level where it can produce an explanation I can understand, I would bet on Magnus Carlsen instead.
Of course we should care about the quality of AI systems, but chasing a human-understandable explanation is the wrong way to go about it, since in many cases it necessarily limits the quality of the decisions.
> We're talking about algorithms based on a simplified neural architecture, with no redundancy and no self-reflection, which are still quite immature.
> And yet we're being asked to trust a black-box AI that we cannot interrogate?
Yes, of course, what's the worst that can happen?