Hacker News

I say if the adversarial strategy is learnable by human players (even if humans didn't invent it), then the battle isn't over yet. Especially if the same adversarial strategy works against most or all AI opponents. The battle is really over when there are no more adversarial strategies simple enough for humans to use.


The interesting thing is that the adversarial strategy can be explained to any amateur dan-level player in a few sentences, in enough detail for them to use it successfully, provided they are sufficiently strong.

When learning to play Go, you go through a number of stages, and sometimes you have to unlearn something you picked up at an earlier stage. I am a weak amateur player (around 10k): I have only a vague idea of aji ('taste' in Japanese) and can't tell good aji from bad aji, and I haven't even mastered life and death. When AIs started to beat professional Go players, they seemed to have a good understanding of aji, because they created good aji and avoided bad aji. They also played a very balanced game, taking in the whole board, something non-professional players often fail to do. And now the adversarial strategy reveals that the AIs don't even have a solid grasp of life and death, and fall for 'tricks' that even I would have been able to counter.


In chess, it is possible that humans and bots can both someday play perfectly, and every game will be a draw. (Young people might play chess until learning how to draw, and then move on.)

For Go there is no such possible outcome, and deep reading (calculation) will always be in the machine's favor. The architecture of AlphaGo (policy and evaluation networks plus MCTS) is sufficient to recreate and surpass any kind of human-practical strategy. The era of human dominance in Go is simply over.

Of course, current AI programs will never be perfect either, so there may continue to be such adversarial attacks, but they will necessarily become harder to find (and quickly impossible for humans to use). AI Go programs probably have nearly unlimited room to keep improving marginally against each other, though.


> The architecture of AlphaGo (policy and evaluation networks plus MCTS) is sufficient to recreate and surpass any kind of human-practical strategy

This is an assertion without evidence. Leela Zero uses the architecture of AlphaGo and yet this guy just beat it with a human-practical strategy. I have no doubt that this can be fixed, but exactly how to fix it is not obvious and it may require more compute, and that's interesting.


Humans have two ways to pick a move: intuition, i.e. looking at the board and choosing based on the position (pattern matching), and calculation (tree search). Machines can already do both better. That is my evidence. There isn't any room left in this process for some uniquely human ability.
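For what it's worth, the way AlphaGo-style engines combine those two modes is a PUCT-style selection rule: the policy network's prior ("intuition") steers which branches the tree search explores, and accumulated simulation values ("calculation") take over as visit counts grow. A minimal sketch, where the function names, the dict layout, and the c_puct constant are illustrative rather than taken from any particular engine:

```python
import math

def puct_score(child_visits, value_sum, prior, parent_visits, c_puct=1.5):
    # Exploitation term: average value of simulations through this move.
    q = value_sum / child_visits if child_visits > 0 else 0.0
    # Exploration term: the policy prior, scaled down as the move gets visited.
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u

def select_move(children):
    """Pick the child with the highest PUCT score.

    children: list of dicts with keys 'visits', 'value_sum', 'prior'.
    """
    parent_visits = sum(c["visits"] for c in children)
    return max(
        children,
        key=lambda c: puct_score(
            c["visits"], c["value_sum"], c["prior"], parent_visits
        ),
    )
```

With zero visits only the prior matters, so the search starts out looking exactly where the pattern-matching network points; as simulations accumulate, the averaged value term dominates and calculation overrides intuition.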



