game review no explanation


I'm testing out the Diamond membership free trial for a week because I want to see if I'm interested in it. For some moves in Game Review it will say that such-and-such move is best, yet it doesn't give a description of the move, how the move is best, or the idea behind it. Again, I've never paid for Diamond membership; I'm on the second day of the free trial. I know that for some moves the AI will give you two or three sentences on why a move is better, but for others it won't say anything besides why my move was a mistake. My issue is that for some moves it won't elaborate, and I don't know if that's a bug or if it's normal, but I would like feedback.
Move explanations won't always go into great detail. The code is trying to translate the engine line into a human explanation, and that's sometimes very hard to do. The code is constantly being worked on, but if the explanation isn't enough, you should be able to use Show Line.
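If it helps to picture what's going on under the hood: move labels of this kind are typically derived from how much evaluation the played move gives up versus the engine's best move, and the explanation text is then generated from templates keyed to that label. A minimal sketch of the idea; the thresholds and labels here are illustrative assumptions, not the site's actual values:

```python
# Illustrative sketch only: label a move by how much evaluation it gives up
# versus the engine's best move, then pick an explanation template keyed to
# that label. The thresholds here are made up, not the site's real cutoffs.

def classify_move(best_eval_cp: int, played_eval_cp: int) -> str:
    """Both evaluations in centipawns, from the mover's point of view."""
    loss = best_eval_cp - played_eval_cp
    if loss <= 0:
        return "Best"
    if loss <= 20:
        return "Excellent"
    if loss <= 50:
        return "Good"
    if loss <= 100:
        return "Inaccuracy"
    if loss <= 300:
        return "Mistake"
    return "Blunder"

print(classify_move(35, 35))    # -> Best
print(classify_move(35, -180))  # -> Blunder
```

When no template fits a position cleanly, the honest fallback is exactly the advice above: show the raw engine line.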

The whole thing is programming hogwash. Player A and Player B were on another server, with ratings calculated there as 1738 and 1775 respectively. Their game was imported here and reviewed, and Player A received a Game Review score estimated as 1900. Player A and their friend Player C then partnered to replay the game as notated, move for move, on this server, both accounts rated near 1300. After review, Player A's performance became 1450. It should be noted that the program should not, and did not, consider time expenditure as a factor in the valuation of a chess move, as that data is not available in the PGN imported from the other server anyway. This experiment was conducted only after a long series of evaluations across rating samples was already suspect. Please do not send an egghead back to me claiming the internal programming is scientifically objective.

Ahh, thanks for the feedback. I just don't understand what Stockfish wants me to do with certain moves.
"Game review" is useless for this. Switch to the "Analysis" tab, and there you can see all the moves with engine evaluations, plus you can try any move for both sides, go back and forth as you wish.

The estimated player ratings for a game are just that: estimates. The algorithm takes into account the players' own ratings and their respective accuracies in the game.
So if a pair of players with different ratings plays the exact same game, the estimates will be different, as you found. I don't think anyone from the site has ever claimed that feature is some kind of absolute true rating of the players involved.
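For anyone curious how such anchoring could produce the discrepancy described above, here is a toy model, with entirely made-up weights and scales, of an estimate that blends the account rating with an accuracy-implied rating. It is not chess.com's actual algorithm:

```python
# Toy model (not the site's algorithm): blend the account rating with a
# rating implied by game accuracy. Anchoring to the account rating is
# exactly why the same moves score differently on different accounts.

def estimated_game_rating(account_rating: float, accuracy: float,
                          prior_weight: float = 0.5) -> float:
    # Hypothetical linear mapping from accuracy (0-100) to an implied rating.
    accuracy_implied = 400 + 24 * accuracy
    return (prior_weight * account_rating
            + (1 - prior_weight) * accuracy_implied)

same_accuracy = 85.0
print(estimated_game_rating(1738, same_accuracy))  # 2089.0 for the 1738 account
print(estimated_game_rating(1300, same_accuracy))  # 1870.0 for the 1300 account
```

The same accuracy yields a higher estimate for the higher-rated account purely because of the prior, which matches the effect reported in the experiment above.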

Oh yes, I totally agree. There is no reason not to trust raw evaluations from known engines familiar to the world. Our study simply exposes the fact that, as a commercial lure, the server creates an independent product on the illusion that great and exhaustive work goes into generating an environment with its own user-friendly innovations for chess study, expounded beyond what a mere engine can tell you. This is fraudulently untrue. I certainly agree: stick with Stockfish.

Obviously the translation of an Elo figure, or any other number, from a single chess game is an estimate that is likely flawed. We all know this, and it is not our point. Our point is the systematic discrepancy. Thank you, Martin; you have fully corroborated the point. Fischer and I move the same pieces: Bobby gets 2700, you or I get 1200. There you go. I hope others understand this now. A chess move is a chess move is a chess move. The pieces don't know what your historic skill level was supposed to be, and an engine doesn't figure it in either. Game Review complicates a simpler science into meaningless data. I truly appreciated your reply.

Game accuracy and estimated rating are more or less toys, not serious tools. It is interesting to see them, but one shouldn't put too much faith in them.
They changed how accuracy works mostly to encourage weaker players by not showing them an accuracy of 18 or the like (I've checked some of the games I played a couple of years ago and, lo and behold, I got over 50 for even pretty terrible games). Now almost everyone has 60-90 accuracy; only a small fraction of games fall below or above those figures.
Plus, many people become suspicious of cheating because lower-rated players are capable of getting 80-85 accuracy in some of their games, so I can only imagine how many more cheating reports the fair play team gets as a result.
As an example of how unreliable these tools are: quite some time ago, I played an unrated game against someone rated around 600. The opponent blundered three times in around ten moves, resigned afterwards, and still got almost 70 accuracy for such a bad game, and his estimated rating was around 1300 or so. This is obviously ridiculous, but it is what it is.
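To illustrate how a "nobody sees 18 anymore" change could work, here is one hypothetical way to compress a raw accuracy score so the low end gets lifted. The curve shape and constants are guesses for illustration, not the site's formula:

```python
# Hypothetical illustration of lifting the low end of a raw accuracy score.
# The logistic shape and constants are guesses, not the site's actual curve.
import math

def friendly_accuracy(raw: float) -> float:
    """Map a raw 0-100 accuracy onto a compressed, friendlier 0-100 scale."""
    return 100 / (1 + math.exp(-(raw - 15) / 18))

for raw in (10, 18, 40, 60, 85):
    print(raw, "->", round(friendly_accuracy(raw), 1))
# 10 -> 43.1, 18 -> 54.2, 40 -> 80.0, 60 -> 92.4, 85 -> 98.0
```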

Game Review, for most people in my opinion, has the best value in the score graph, move classifications, and explanations. There is continuous work on the review process, much of it around handling classifications and move explanations. The Report Card is interesting, but not particularly useful for most purposes.

Basically, you have to figure it out yourself. There is no magic wand at the moment that will tell you this move is bad because of this, this, and this.
Instead, look at the move (at that rating, try at least to understand big shifts in the evaluation). If it is not obvious from the first move, make some more moves. That way you might be able to understand it (perhaps a certain move is a blunder because there is a two-move combination that traps a piece, or something to that effect that doesn't show on the first move).
At the moment, there is a good chance you will not concretely understand why a certain position says +1.8, for instance.
You need to learn about chess yourself by analyzing more (by going through your games) and by learning passively through videos, books, or any other sources out there. That way you will be able to analyze games better and understand more and more of the analysis.
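If you want to do that kind of move-by-move digging outside the site, a sketch like the following works with the python-chess library and a locally installed Stockfish (the binary path and the line of moves are placeholders to adjust):

```python
# Sketch: step through a line move by move with a local engine to see where
# the evaluation shifts, rather than staring at a single number like +1.8.
# Assumes the python-chess library and a Stockfish binary; the path below
# is a placeholder for your own machine.
import chess
import chess.engine

STOCKFISH_PATH = "/usr/local/bin/stockfish"  # placeholder

board = chess.Board()
line = ["e2e4", "e7e5", "g1f3", "b8c6", "f3e5"]  # an illustrative line

engine = chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH)
for uci in line:
    board.push(chess.Move.from_uci(uci))
    info = engine.analyse(board, chess.engine.Limit(depth=15))
    print(uci, info["score"].white())  # evaluation after each move
engine.quit()
```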

Yes, it is what it is. My favorite word you used in your post was "toy." That word is key to my own point. My friends and I are not trying to prosecute the moon, only to make sure other players know what it is they are looking at. One must realize that if, in some ridiculously imaginary scenario, I were playing Fischer and said, "Sorry Bobby, here is my Tal-like combinative brilliancy that leaves you dumbfounded by a forced mate in eight," Bobby wouldn't care that I was only "supposed to be" 1300. No TD would be called in to invalidate my move. That scenario, too, would be an "is what it is" thing. The programming should not place Elo in a higher hierarchy than the chess move itself. As it is, yes, it's just a very useless "toy."

As for the word "toy," I must admit it wasn't my invention, so I can't take credit for it. I think I saw @magipi use it in some thread on this same topic, though I am not completely sure. I like it as well; it is pretty fitting, I would say.
And I have to admit, I like seeing the accuracy and estimated rating after I play a game, even though I know they are not to be taken too seriously.

I am willing to concede that the score graph does have a usefulness, in that it is patterned after what is already produced and popular in the tracking of live athletic events (e.g., the Super Bowl and such), which can now publish an up-to-the-second "estimated chance of winning" in just about any sport in the form of such a graph. It's also useful in that, ahem, it directly reflects the engine evaluation with no prejudice for historic skill level.
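Those win-chance graphs are generally just a logistic squash of the raw engine evaluation, which is consistent with the point that no rating prior is involved. A sketch with an illustrative steepness constant, not any site's published value:

```python
# Sketch: turning a raw centipawn evaluation into a "chance of winning"
# percentage with a logistic curve, the kind of mapping behind such graphs.
# The steepness constant is an illustrative choice, not a published value.
import math

def win_probability(cp: float, k: float = 0.004) -> float:
    """White's winning chances (0-100) from a centipawn evaluation."""
    return 100 / (1 + math.exp(-k * cp))

for cp in (-300, -100, 0, 100, 300):
    print(cp, "->", round(win_probability(cp), 1))
# -300 -> 23.1, -100 -> 40.1, 0 -> 50.0, 100 -> 59.9, 300 -> 76.9
```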

As you may have noticed within the thread, two other players and myself, all with programming backgrounds, did a little light research on this feature. We appreciate a staff member's transparency in admitting that Elo is weighed as a factor in the Review's results, but we remain dumbfounded by whoever gave the programmers the authority to structure it that way. Any long yarn of chess speak or coding vernacular to rationalize this logic is not over most of our heads and only digs the hole deeper. The attempt to translate a game performance into a familiar longer-term figure is noble. To apply past performance in the assessment of a solid-state board position (or a series thereof) is absolutely ludicrous. Since there is no reason not to trust whatever other factors are presumably included to produce the "pseudo-Elo," I would recommend this: the PGN can be edited once downloaded from another property. So "share" your game to save a copy of the PGN to your own device, edit your side's Elo to read whatever number you wish, then go back into Learn/Analysis and load the game there. Run Game Review from that. It will then assess the very same chess moves by you and your opponent as supposedly representative of "X."
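For anyone who wants to try that experiment, here is a minimal sketch of the PGN edit using the python-chess library; the file names and rating value are placeholders:

```python
# Minimal sketch of the experiment described above: rewrite the Elo tags in
# a downloaded PGN before re-importing it. Uses the python-chess library;
# the file names are placeholders.
import chess.pgn

with open("my_game.pgn") as f:
    game = chess.pgn.read_game(f)

game.headers["WhiteElo"] = "2700"  # make the PGN claim any rating you like
game.headers["BlackElo"] = "2700"

with open("my_game_edited.pgn", "w") as f:
    print(game, file=f)
```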