How accurate are the Chessmaster Grandmaster Edition bot Elo ratings?

jjlai1111

I've just played a game vs Cal rated 1561 and here is the game.

Is this really a 1561 Elo level??? I think it's more like 1050-1100 Elo, giving away two minor pieces and allowing a checkmate.

Alemor64

If he has just started playing on Chess.com, the Elo isn't accurate yet. Edit: Ah, it is a chess bot, you are right.

jjlai1111

The bot Elo is not accurate in Chessmaster Grandmaster Edition; they keep giving away free pieces.

EscherehcsE
jjlai1111 wrote:

I've just played a game vs Cal rated 1561 and here is the game.

Is this really a 1561 Elo level??? I think it's more like 1050-1100 Elo, giving away two minor pieces and allowing a checkmate.

The Chessmaster personalities are usually fairly close to reality, but in some cases they can be hundreds of rating points off in either direction. It looks like Cal is probably one of those latter cases. In general, I think the Josh personalities should be fairly close to right.

TsetseRoar

Simulating a human player's play style is actually quite a challenging problem for AI.

Many bots in many programs simply hang pieces, but play the opening and endgame pretty flawlessly and overall it balances out. On chess.com for example, the bots hang pieces but by the time I face an 1800 bot it nevertheless gets hard. I recommend you play a bot closer or above your level. It may drop pieces, but can you consistently beat it?

Incidentally, there is now a deep learning AI that apparently makes plans and errors similar to equivalent rated humans: Maia.

EscherehcsE
TsetseRoar wrote:

Simulating a human player's play style is actually quite a challenging problem for AI.

Many bots in many programs simply hang pieces, but play the opening and endgame pretty flawlessly and overall it balances out. On chess.com for example, the bots hang pieces but by the time I face an 1800 bot it nevertheless gets hard. I recommend you play a bot closer or above your level. It may drop pieces, but can you consistently beat it?

Incidentally, there is now a deep learning AI that apparently makes plans and errors similar to equivalent rated humans: Maia.

This is interesting. I may check it out soon if I can get some spare time. Thanks for the link.

jjlai1111

I play Maia on lichess; you can type 'fast' or 'slow',

and of course Maia1 is the easiest and Maia9 is the hardest.

zlatkod168
jjlai1111 wrote:

I've just played a game vs Cal rated 1561 and here is the game.

Is this really a 1561 Elo level??? I think it's more like 1050-1100 Elo, giving away two minor pieces and allowing a checkmate.

Yes, they are overrated. Even the personalities in the 1700s give up a minor piece for a pawn, and then it's just a matter of finishing the game without blundering. The highest rated personality that I beat there was rated 2395, while here I can't get past 1600.

aviation18

Nice

jjlai1111

Ok

Bruno5979

French translation by Google

Hello,

I also have Chessmaster versions 10 and 11.
In December 2020 I created several profiles to track the same ratings as on Chess.com, where I have been registered since October 2020.
So I created Bruno 2+1, Bruno 5+5, Bruno 15+10 and Bruno >15+10.
I played a lot in December, which allowed me to draw some conclusions. Admittedly this is not a scientific study, since I am the only user, but it is a start.

Overall the level is consistent, I think, at least at my level. On Chess.com I am at about 1350 (sometimes I fall back to 1300) in rapid, where I almost exclusively play 30-minute games. On Chessmaster, in 15+10 and >15+10, I am at 1400, though without being sure I can stay at this level.
For each 100-Elo band the results also look correct:

Against the 1500/1600s I scored about 20-25%
Against the 1400/1500s about 40-45%
Against the 1300/1400s over 50%
etc.
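For what it's worth, score bands like these can be sanity-checked against the standard Elo expected-score formula, E = 1 / (1 + 10^((R_opp - R_own)/400)). A minimal sketch in Python (the 1400 rating and the opponent ratings below are just illustrative numbers drawn from the report above, not anything official):

```python
def expected_score(my_rating, opp_rating):
    """Standard Elo expected score (average points per game) vs an opponent."""
    return 1.0 / (1.0 + 10 ** ((opp_rating - my_rating) / 400.0))

# Rough check against the per-band scores reported above, for a ~1400 player:
for opp in (1550, 1450, 1350):
    print(f"vs {opp}: expected score {expected_score(1400, opp):.0%}")
```

The formula predicts roughly 30% against a 1550, 43% against a 1450, and 57% against a 1350, which is in the same ballpark as the 20-25%, 40-45%, and 50%+ reported, so the per-band results are at least self-consistent.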

On the other hand, individually there are big rating anomalies.
For example Marius, who is rated in the 1100s, plays much better than that! Or else I have had terrible luck against him! From memory I scored only about +2 -6 against him. There is another 1100 who is also very underrated.
Mariah, a 1400, is also very underrated, roughly like Marius.
Conversely, Josh age 8, rated 1600, is overrated; I am on a level playing field with him, and he does not have the powerful endgames his description claims, since I beat him in the endgame even though I am very weak there. Same for Amo and Aaron, who are between 1550 and 1600.

When Chessmaster plays at a faster time control, its level is much lower.
The proof is that in 5+5 I am at 1450 on Chessmaster, while on Chess.com I fall to 1050/1100 at that pace. So the chosen time control may have had an impact as well.

Against Cal I found only one game, which I won, so yes, probably overrated. The information about him says he makes sacrifices that are not always sound.

Unfortunately, the software bugs out quite often, and one game out of 10 or 20 cannot be completed.

0peoplelikethis

1500s and 1700s do exactly that. Blundering pieces every other game.

jjlai1111

oh ok

KnightChecked

That AI was definitely not playing at a 1500 level.

That was closer to 800 or so.

KnightChecked
0peoplelikethis wrote:

1500s and 1700s do exactly that. Blundering pieces every other game.

1500s will blunder pieces from miscalculation, yes. But they won't just place pieces en prise like that.

The AI in the original post played basically like this:


0peoplelikethis

Fair enough. 3...Qd3 is nonsense.

GrandioseStrategy

Chessmaster is just a video game. No human 1500 will give away a whole piece like that.

Bruno5979
KnightChecked wrote:
0peoplelikethis wrote:

1500s and 1700s do exactly that. Blundering pieces every other game.

1500s will blunder pieces from miscalculation, yes. But they won't just place pieces en prise like that.

The AI in the original post played basically like this:


Fake game ???

Even Cassie, the 23-Elo Chessmaster personality, plays much better than this. Once Cassie won against my girlfriend, who is around 700-800 Elo.

DooshKanoo

Chessmaster bots' ratings can seem reasonably accurate, but then you run into one that suddenly plays like it's WAY better or worse than its rating. It's like some of them have memory and if you beat them enough times, the engine decides it's time to randomly have them play best moves for 20 moves in a row. And I'm not talking high rated bots here! When I started playing chess, I decided to work through every Chessmaster bot, only moving to the next one when I beat the current one 10 times in a row, with no losses or draws, so I was in a position to REALLY notice their quirks.
 In the end I quit cus ain't nobody got time for that many games. Especially as it seems they start taking ages to make a move at some point.
 IIRC Eddie was a particularly strange one. It would play perfectly for a while, then make an incredibly stupid blunder, rather than spreading its ability more naturally and evenly.
 Others "feel" a lot more consistent, but you get that occasional one that can be much tougher than the next 20 above it.
 And SOME, I found, could be beaten every time with the exact same sequence of moves. As long as you found the sequence, the game played out the same every time. It was the exact same game, move-for-move. But if you deviated from the sequence at all, it would start branching out again and regain a semblance of free will/randomness. Don't quote me, but I think Cassie was one of those. I only ever found 2, but there could be a lot more. These sequences weren't necessarily perfect play. There were others that would fall into a sequence as long as you got to a certain position a certain number of moves in. As long as they played the first 3 or 4 moves that you wanted them to, you could have literally premoved the rest of the game from that point. And I'm not talking about forcing moves, either.
 Imagine how much time you'd need to spend playing, to even stumble on something like that, and discover that it's endlessly repeatable.
 So yeah, when I say the Chessmaster bots are weird, I know what I'm talking about. I'd love to ask the programmers what was up with all that.
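The repeatable move sequences described above are what you'd expect from an engine whose move selection contains no randomness: the same position always yields the same move, so replaying your own moves exactly replays the whole game, while any deviation makes it "branch" again. A toy illustration of that idea (a hypothetical move chooser, not Chessmaster's actual code; the evaluation ordering is a stand-in):

```python
import random

def choose_move(position, candidates, rng=None):
    """Pick a move for this position; deterministic unless an RNG is supplied."""
    ranked = sorted(candidates)    # stand-in for an engine's evaluation ordering
    if rng is None:
        return ranked[0]           # no randomness: same position -> same move
    return rng.choice(ranked[:2])  # randomized: may branch between top choices

# Deterministic engine: replaying identical positions reproduces the whole game.
line = ["e4", "d4", "c4"]
assert choose_move("start", line) == choose_move("start", line)
```

With `rng=None` the "game" is endlessly repeatable, exactly like the scripted-feeling bots described above; supplying a seeded `random.Random` restores branching while still being reproducible for a given seed.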