
My hypothesis is that learning chess is such a long path that people tend to forget what they were like when they started, and think games at their current level are not interesting to share or stream. Pogchamps' success demonstrates that a lot of people enjoy chess as entertainment in addition to a mental challenge.


I can't speak for Go, but in chess the best players in the world still understand the nuances of a position better than the computer engines and, when occasionally proven wrong by the computer analysis, are able to understand the refutation and refine their strategic evaluation. I know this because it's what I've been doing for the past seven years in the realm of correspondence chess to gain the title of international master.


You are right if by "better" you mean "competitively stronger under tournament or rapid conditions". Humans are still far stronger strategically and competitively if given enough time and resources to avoid tactical mistakes. So yes, humans still provide unique insight into chess every day in correspondence chess and analytic research.


Humans aren't stronger strategically anymore either, under any conditions.

In 2014 a heavily handicapped Stockfish beat the 5th-ranked player in the world (Nakamura) under tournament conditions, despite having no access to its opening book or endgame tablebases and giving a one-pawn handicap.


The match you are referring to was played under tournament conditions that clearly handicapped the human Grandmaster. I read in the report of the match [0] that "The total time for the match was more than 10 hours [...] The two decisive games lasted 147 and 97 moves." These unfavourable conditions clearly penalized the human, so the result can hardly be taken as meaningful regarding strategic superiority. From the quiet of my room I instead regularly find strategic plans that overcome my and my opponent's computers. Feel free to join the correspondence chess federation [1] to experience the joy and pain of strategic research!

[0] https://www.chess.com/news/view/stockfish-outlasts-nakamura-...

[1] www.iccf.com


The ratings you are referring to are typically based on tournament or rapid games, where the limited time induces mistakes from the human players that the computer capitalizes on. Given enough time, or with a "blunder check" option, the best human players are still strategically stronger. In correspondence chess, where there is much more time at one's disposal, the human players can still improve on the computer's suggestions.

Source: I’m a correspondence international chess master


Yeah, I was thinking about classical or standard time controls. In the last big cyborg tournament a few years ago, I remember computers coming in 1st and 2nd.

I wasn't thinking about correspondence, but what was the latest large cyborg correspondence tournament?


I don't know the last one, but I recall the matches of the Hydra chess machine [0] in the early 2000s against GM Adams under tournament conditions (5½ to ½ for the machine) and against GM Nickel under correspondence conditions (2 to 0 for the human). Both Grandmasters were top players in their respective fields, so it showed very clearly how the time limitation impacted the competitive results. Nobody in the chess elite would claim that Hydra understood chess better than GM Adams, but he still lost resoundingly due to the inevitable mistakes caused by the relatively fast time control.

[0] https://en.wikipedia.org/wiki/Hydra_(chess)


But wasn't Hydra in 2005 ~2800 Elo, whereas the current best chess engines like Leela Chess Zero or Stockfish are ~4000 Elo?

I just realized that correspondence chess is cyborg chess. I didn't know computers were legal in correspondence chess, but it makes sense now. Reading about it, it sounds like it's less about knowing chess and more about understanding the applications you're using.


Chess engine ratings are not directly comparable to human ratings, as they are derived from different player pools. Hydra played relatively few games, so its rating estimate was somewhat approximate, but it was clearly "superhuman" (GM Adams was No. 7 in the world and only scored one draw in 6 games). Today Stockfish is given a rating of about 3500 [0] on typical PC hardware, but this rating comes from matches between engines, not against humans.
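For intuition on what such a gap would mean if the ratings were comparable: under the standard Elo model the expected score follows from the rating difference alone. A minimal Python sketch (the formula is the standard Elo one; the numbers are just the figures quoted above):

    # Expected score of player A against player B under the standard Elo model.
    def expected_score(rating_a, rating_b):
        return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

    # ~3500 (Stockfish on the CCRL list) vs ~2800 (a top human): the model
    # predicts roughly a 98% score. But that holds only if both ratings came
    # from the same pool, which is exactly what fails here.
    print(expected_score(3500, 2800))  # ~0.98

The caveat in the comment is precisely that this premise fails: CCRL ratings are calibrated only within the engine pool.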

Regarding the argument of "knowing chess", it depends on your definition. I often use this analogy: correspondence chess is to tournament chess what the marathon is to track running. They require different skills and training, but I guarantee you that a lot of understanding is involved in correspondence chess, possibly more than in tournament chess.

[0] https://ccrl.chessdom.com/ccrl/4040/


Oh, I assumed it required quite a bit of chess knowledge and skill. But I assume what differentiates a good from a great player isn't unassisted chess ability. Basically I'm wondering how well correspondence ratings track with unassisted ratings. It was my understanding that they don't track very well at the higher levels of correspondence chess.


I strongly disagree. The best correspondence chess players often improve on the computer's suggestions. It takes time, energy and great strategic knowledge, but it's still possible.

Source: I’m a correspondence chess international master


I suppose he can’t because it isn’t true at all. The best correspondence players usually improve significantly over the computer suggestions.

Source: I’m a correspondence chess international master


> The best correspondence players usually improve significantly over the computer suggestions.

I might be misunderstanding your claim, but how can humans playing correspondence chess beat Stockfish or Lc0?


In official correspondence games computer assistance is allowed, so most (if not all) of the players start their analysis from the computer's suggestions (Stockfish, Lc0 or others). Some players limit themselves to this and play the engine's move; others try to improve it with their own contribution. If no human contribution were possible, correspondence chess would become a hardware fight, whereas results show that the best players can defeat "naive" opponents who rely on computer suggestions. In this sense, every correspondence chess win is a win over the opponent's hardware and engine.
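For a concrete picture of that starting point, this is roughly how one might pull an engine's top candidates with the python-chess library (the Stockfish binary name, the time budget and the MultiPV count are my assumptions, not part of any official workflow):

    import chess
    import chess.engine

    board = chess.Board()  # or chess.Board(fen) for the current game position

    # Name/path of a locally installed Stockfish binary (an assumption).
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")

    # Ask for the three best candidate lines (MultiPV) with a long think.
    infos = engine.analyse(board, chess.engine.Limit(time=60), multipv=3)
    for info in infos:
        print(info["multipv"], board.san(info["pv"][0]), info["score"].white())

    engine.quit()

The "naive" player described above stops at the first line of that output; the human contribution starts with asking whether lines 2 and 3 might actually be better.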


Isn't it possible that you're not improving upon the engine's suggestions, but instead, your opponent is choosing suboptimal non-engine lines, and your engine is beating their weakened engine?


Occasionally it is possible. After seven years and more than one hundred games played, I can tell you that I have been surprised by my opponent's reply no more than a handful of times. By "surprised" I mean he didn't play the top choice of the engine. In fact, most of the time the best move in a given position is easily agreed on by any reasonable engine on any decent hardware.

In a few critical moments in the game, the best move is not clear and there are two, three or more playable alternatives that lead to very different positions. In these cases the computer, after a long think (one or more hours), usually converges to one suggestion and sticks to it even if given more time (a sort of "horizon effect"). These are the moments where a human, after long thought, can improve on the computer's suggestion and favour the 2nd or 3rd choice of the engine. So in brief, no: I can't recall a game where I've been gifted the win by an opponent's "weakened" move; most of the time I have been confronted with the "engine-approved" suggestion and have had to build my win by refuting it.
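If you want to see that "converges and sticks to it" behaviour for yourself, here is a rough sketch (python-chess with a local Stockfish binary, both my own assumptions): re-run the search with growing time budgets and watch whether the preferred move ever changes. In the critical positions described above, it typically does not, however long you wait.

    import chess
    import chess.engine

    # Substitute the FEN of the critical position you are studying.
    board = chess.Board()

    engine = chess.engine.SimpleEngine.popen_uci("stockfish")

    # Same search, increasing time budgets: does the top choice move?
    for seconds in (10, 60, 600, 3600):
        info = engine.analyse(board, chess.engine.Limit(time=seconds))
        print(seconds, board.san(info["pv"][0]), info["score"].white())

    engine.quit()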


I assume that when you come across one of these novel moves, you plug it into the computer, give it time to search, and it ultimately decides that it's superior?

Relatedly, can you give some examples of novel non-engine lines that turned out to be better than engine lines?


Sometimes, if you play a move and the first plies (i.e. half-moves) of the main variation, the computer starts "understanding" and its score changes accordingly. Those are the cases where more hardware power could be useful and make the engine notice the change from the starting position. More often, the "non-engine" move relies on some blindness of the engine, so the computer starts understanding its strength only when it's too late. In these cases it's unlikely that more power would bring benefits. Typical cases are:

- Fortresses [0]. One side has more material but the position can't be won by the superior side. As the chess rules declare a draw only after 50 moves without captures or pawn pushes, current engines can't look that far ahead and keep manoeuvring without realizing the blocked nature of the position. Some engines have been programmed to solve this problem, but their overall strength decreases significantly.

- Threefold repetitions [1]. The engine believes the position is equal and moves the pieces in a, let's say, pseudorandom way. Only at some point does it realize the repetition can be avoided favourably by one side. This topic is also frequently discussed in the programming forums, but no clear-cut solution has yet emerged (see the sketch below).
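Both cases boil down to rule-based draw conditions that sit beyond the search horizon. As a minimal illustration (using the python-chess library, my own choice of tooling), these are exactly the conditions you can check outside the engine, even when a fixed-horizon search fails to anticipate them:

    import chess

    # Substitute the position under analysis (here just the start position).
    board = chess.Board()

    # Rule-based draw conditions a fixed-horizon search may not "see":
    print(board.halfmove_clock)                    # half-moves since last capture or pawn push
    print(board.can_claim_fifty_moves())           # is the fifty-move draw claimable?
    print(board.can_claim_threefold_repetition())  # is a threefold repetition claimable?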

If you are looking for positions where human play is still better than the engine's, the opening phase is the most fruitful. Most theoretical lines were born of human creativity, and I doubt a chess engine will ever be able to navigate the intricacies of the Poisoned Pawn Variation of the Sicilian Najdorf [2] or the Marshall Attack of the Ruy Lopez [3]. Neural-network engines are strategically stronger than classical AB programs in the opening phase, but they suffer from occasional tactical blindness. Engine-engine competitions often use opening books to force the engines to play a prearranged variation, to increase the variability and reduce the draw percentage.

[0] https://en.wikipedia.org/wiki/Fortress_(chess)

[1] https://en.wikipedia.org/wiki/Threefold_repetition

[2] https://en.wikipedia.org/wiki/Poisoned_Pawn_Variation

[3] https://en.wikipedia.org/wiki/Ruy_Lopez#Marshall_Attack


I'm interested because the experience in Go is that humans simply can't keep up.

What is the evidence that it isn't a hardware or software differential between the players? I can't think of an easy way to ensure that both players started with computer-suggested moves of the same quality.


There are a lot of engines with ratings on the chart way higher than the best humans, so in theory every suggestion of theirs should be enough to overcome any human opponent. In practice most (if not all) of the players rely on Stockfish and Lc0 (both open source). During a game, most of the time the "best" move is easily agreed on by every reasonable engine on any decent hardware. Only in a few cases during a game does the position offer two, three or more playable choices. In these cases stronger hardware or a longer think rarely makes the computer change its mind. It's a sort of horizon effect, where more power doesn't translate into a really better analysis.

For example, in a given position you could have three moves:

- M1: a calm continuation with a good advantage

- M2: an exchange sacrifice (a rook for a bishop or a knight) for an attack

- M3: a massive exchange of pieces entering a favourable endgame

If the three choices are so different, the computer usually can't dwell long enough to settle on a clear best move. Instead the human can evaluate the choices until one of them emerges as clearly best (for example, the endgame can be forcibly won). In these cases the computer suggestion becomes almost irrelevant, and only a naive player would make the choice based on some minimal score difference (which can vary unpredictably with hardware, software version or duration of analysis). So the quality of the starting suggestion is somewhat irrelevant if you plan to make a thoughtful choice.
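As a toy illustration of that last point, one could even flag such "critical moments" automatically. This sketch (python-chess again; the 30-centipawn threshold is purely my own choice, not anything correspondence players formally use) treats near-equal MultiPV scores as a signal to stop trusting the engine's marginal preference and start thinking:

    import chess
    import chess.engine

    board = chess.Board()  # substitute the position under analysis
    engine = chess.engine.SimpleEngine.popen_uci("stockfish")

    # Long think over the three best candidate lines.
    infos = engine.analyse(board, chess.engine.Limit(time=600), multipv=3)
    scores = [info["score"].white().score(mate_score=100000) for info in infos]

    # If the top candidates sit within ~0.3 pawns (30 centipawns) of each
    # other, the score difference is noise of the kind described above.
    if max(scores) - min(scores) <= 30:
        print("critical moment: evaluate the candidates yourself")
    else:
        print("engine choice is clear-cut")

    engine.quit()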


python in this world, lisp in paradise

